Can “New Silicon Valley” Survive without Ads?

Silicon Valley Apocalypse

I’ll start by stating something that I thought should be obvious by now: nothing is free, especially when it comes to content and services. I’m not trying to be a Richard when I say things like this; I just feel like most of us are only paying lip service when we talk about valuing people, time, and hard work. We offer euphemisms like, “you can’t get something for nothing,” but when it comes down to it, we’ve all come to expect a lot of things for “free.” When it comes to online content and services, a lot of us consider our use of Google Maps, for example, to be free. But it’s not; we pay for the service by turning over our personal information, GPS location data, search history, and more, with all of that data being used to target advertising more accurately.

I want you to stop and think about this for a second: almost every service that’s “free” on a connected device is primarily a tool for selling us more stuff later down the road. I’m not saying this business model is new; I’m merely stating that we’ve been in advertising overdrive since the transition to the digital era, and I’m not sure it’s sustainable. When I talk to clients about the “New Silicon Valley,” I’m mainly describing the market-share-first, advertising-next business models sweeping through the region, all of which are built on the perception that we’re getting content and/or services for free.

Over the last decade, a lot of companies have gone public without presenting a legitimate monetization strategy to investors, presenting only market share numbers for users in the key, but never really profitable, demographic known as Millennials. Each of these businesses ultimately landed on the same path to profitability – advertising. And with so many companies relying on advertising dollars to keep their metaphorical ships from sinking, I’m not surprised that the emergence of native browser ad-blockers gave Silicon Valley quite the scare.

The True Cost of Content

The whisper of the idea that companies are going to be forced to live in a world where ads won’t reach the screens of potential consumers sent chills down the spine of Silicon Valley. If advertising revenue models went away, a lot of your favorite Silicon Valley darlings would plummet back down to earth as if their unicorn wings had been clipped, forcing them to sell their products and services for a hefty fee (Facebook would cost ~$168 a year). This situation could be the ultimate demise of these companies, because no one really buys content or services anymore; as a matter of fact, no one really buys anything. I’m not even sure if it’s okay for me to admit that I miss the days when I handed over money and received something tangible in return.

“Between radio, television, print, online, and subscription services, how many advertising dollars are there to go around?”

I’m no saint when it comes to using advertising as a part of a business model, especially since I subsidize this blog with advertisements (if you see something you like, be sure to click on it), but there is no way there are enough advertising dollars for all of us to survive. Producing content is never free, regardless of its medium; someone had to pay for it in some way. In the case of this blog, my time was spent writing this – time I could have spent growing other parts of the business, managing employees, or making sales calls.

Not only is my time worth some monetary value (I won’t mention my hourly rate), but choosing this activity over another also carries its own theoretical loss of value. Unless this blog goes viral, the pennies on the dollar I’ll generate from advertising revenue will never be enough to make up for the cost of creating this content. And it’s for this reason that I would remind all content creators that advertising revenue is supposed to be a subsidy, not a core revenue stream (Google Search being the exception to the rule).
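To make that opportunity-cost argument concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it (the hourly rate, writing time, ad RPM, and pageviews) is a hypothetical placeholder chosen for illustration, not my actual figures.

```python
# Back-of-the-envelope: does ad revenue cover the opportunity cost of a post?
# All numbers below are hypothetical placeholders, not figures from this article.

hourly_rate = 150.0   # assumed value of an hour spent on billable work ($)
hours_writing = 4.0   # assumed time spent writing and editing the post
rpm = 3.0             # assumed ad revenue per 1,000 pageviews ($)
pageviews = 5_000     # assumed lifetime pageviews for a typical post

opportunity_cost = hourly_rate * hours_writing
ad_revenue = rpm * (pageviews / 1_000)

print(f"Opportunity cost: ${opportunity_cost:,.2f}")
print(f"Expected ad revenue: ${ad_revenue:,.2f}")
print(f"Shortfall: ${opportunity_cost - ad_revenue:,.2f}")
# With these assumptions the post loses money unless it goes viral, which is
# the whole point: ads subsidize content, they don't fund it.
```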

Great Services Equal Great Profits

In the midst of the “New Silicon Valley,” we can’t lose sight of the real problem: companies have yet to position their content and services in a manner that validates their monetary value on their own merit. That situation is especially sad when you consider the number of people who helped create said content and services and go unpaid or underpaid. At some point, content and services will have to generate enough revenue to sustain the businesses that produce them, leading us back to an era when we didn’t consider software “a service.”

“There it is. I don’t believe software is a service –”

I’ve been dancing around calling it out this entire article, but now all the cards are on the table, so I can go hard to close this thing out.

I’m not old enough to call myself “old school” when it comes to service. I wasn’t around for the heyday of personalized service, and I don’t have the money to enjoy the convenience of a personal shopper, but one thing I do know is that service usually involves humans. Not software and a touchscreen, but actual human interactions. While software and automation provide vital cost savings to many businesses, they are also diminishing those businesses’ ability to differentiate themselves from one another. Long term, this is going to be a problem. The only businesses that seem to be flourishing in the digital era, other than a handful of software companies, are those that generate profits through quality service.

In my heart, I believe there are only a handful of companies producing content or software so unique that you can call it a service, and as the fear of failure looms for the rest of the companies that opted to play the “long game” with profits, they will find their backs against the wall in the coming years. You should start asking yourself: what’s the maximum you’re willing to pay for Netflix, Spotify, or any other media service? In the next decade, all of those companies will have to figure out what that number is if they hope to survive.


The Net Neutrality Paradox

Concept Explained…

For those who aren’t as familiar with the topic of net neutrality as us hardcore techies, I’m going to take a minute to summarize, in layman’s terms, what it is and why you should care about it. If you’re reading this post, you’re probably one of the millions of people in the world who access some kind of multimedia via the internet. If you subscribe to Netflix, Hulu, Amazon Prime, Spotify, Apple Music, Pandora, DirecTV Now, or any other streaming subscription service, you fall into the category of people I’m addressing and should make sure to read this article in its entirety.

If you have one of the subscription streaming services mentioned above, you probably enjoy access to thousands of music and movie titles via an internet connection provided by a cable or phone operator, and until now, that hasn’t been a problem. Since the inception of streaming services, these companies have been happily providing internet connections to your homes while adhering to a simple principle called Net Neutrality.

The principle goes something like this: as long as you are paying for the broadband service they provide, whatever you decide to download via that connection is up to you, and the Internet Service Providers (ISPs) won’t interfere with it. But more recently, the content consumers are streaming has inhibited those same companies’ ability to monetize their own content, so now they’re lobbying the FCC to remove the rules that formalized the Net Neutrality principles in writing, enabling them to charge more for content coming through their pipelines that originates from competing services.

History Repeats Itself

You’ve probably been hearing a lot of techies trying to convince consumers that net neutrality needs to stay in place, and taken at face value, that argument would appear to be correct. But if you dig a little deeper, you’ll see that the abolishment of Net Neutrality could be the best thing for those of us who choose to access our favorite media via the internet. Let me explain.

Because it’s much easier to change services and equipment between wireless carriers than it is to switch between ISPs, the wireless industry has always moved faster than the “wired” industry, and it’s generally pretty safe to look toward it for indications of how strategy changes will affect markets. It’s a bit of a canary-in-a-coal-mine situation, which I’m sure the wireless industry would prefer wasn’t the case, but nevertheless, here we are.

It wasn’t that long ago that conversations with wireless executives about unlimited data plans ended with executives stating, with 100% confidence, that they would never have to offer unlimited data plans to their customers. Less than five years later, things have changed, with wireless agreements including unlimited talk, text, and data, in addition to offering consumers a choice between Netflix, Hulu, and HBO. And now, depending on with whom you sign on the dotted line, a whole range of extras is available because of the addition of a new point of competition.

Unlike ISPs, wireless providers compete head-to-head in almost every region, and the result of “true competition” has benefited customers across the nation, as the average wireless bill is lower than it was five years ago. The removal of net neutrality on the wired side of the business would create a point of competition between ISPs similar to the way unlimited data plans affected the wireless industry. This assertion isn’t made without merit; I remember the days before unlimited wireless plans were everywhere, when cell phone service providers were picking and choosing which particular content to include as part of “data-free” streaming. Can you see the similarity?

The 4K Factor

The availability of Ultra High Definition (UHD or 4K) content will probably be the tipping point for all of this, due to the amount of internet bandwidth it requires and traditional cable providers’ inability to offer it. If imposed caps and limitations on streaming content to our homes remain in place (there’s already a 1TB cap), consumers with 4K HDTVs and streaming source content to match will quickly start looking for ISPs that aren’t tacking on extra charges for owning the latest equipment and wanting to take full advantage of its capabilities.
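Some rough math shows why a 1TB cap matters once 4K enters the picture. This sketch assumes the ~25 Mbps bitrate commonly recommended for UHD streams; actual bitrates vary by service and title.

```python
# Rough math: how quickly could 4K streaming eat a 1 TB monthly data cap?
# Assumes a ~25 Mbps UHD stream, a figure commonly cited for 4K streaming.

bitrate_mbps = 25   # assumed 4K stream bitrate (megabits per second)
cap_gb = 1_000      # 1 TB cap, counted here as 1,000 GB

gb_per_hour = bitrate_mbps / 8 * 3600 / 1_000   # megabits/s -> GB per hour
hours_to_cap = cap_gb / gb_per_hour

print(f"~{gb_per_hour:.1f} GB per hour of 4K streaming")
print(f"~{hours_to_cap:.0f} hours of 4K video to hit a 1 TB cap")
print(f"That is roughly {hours_to_cap / 30:.1f} hours per day for a month")
```

Under those assumptions, a household streaming about three hours of 4K video a day hits the cap, before counting anything else on the network.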

The delivery of 4K content has become a more significant point of emphasis now that HDTV manufacturers have been ushering retailers toward selling UHD televisions in higher proportions than 1080p sets, and the transmission of these signals puts an enormous amount of stress on an aging internet infrastructure that streaming providers like Netflix aren’t responsible for maintaining. We can argue about the fairness of this arrangement later, but for now, we’ve reached the core of the Net Neutrality dilemma.

As the amount of data used to deliver 4K content to homes increases, inevitably, consumers will realize their home internet service plans more closely resemble the restrictive wireless data plans of the past than the newer unlimited data plans of the future. This dilemma will force the cable industry to choose which strategy to pursue in the same manner as the aforementioned wireless carriers.

The Rub

On the one hand, cable and internet providers could leave home internet services and Net Neutrality as they stand, collecting overages whenever customers surpass their streaming limit and hoping they can hold on long enough to figure out their own content system.

On the other hand, they could fight to abolish Net Neutrality, unleashing an immediate flurry of competition that will undoubtedly lead down a rabbit hole of including everything but the kitchen sink to maintain video subscribers. So what can they do?

Let me know your thoughts in the comments section and time will tell if any of us had the right answers.


Has HDMI 2.1 Been Worth the Wait?

HDMI 2.1 Has Arrived

HDMI Infographic

HDMI 2.1 is here, but what does it mean for the average consumer? It’s pretty rare that I meet consumers who are up-to-date on their HDMI specification knowledge, and I wonder if the HDMI Consortium was aware of this fact when they put out their HDMI 2.1 press release. Sometimes it seems like these press releases are written for an “engineers only” meeting, and I get the feeling it takes a blog post, like this one, to explain the practical applications of the new specification. So, I am going to give everyone a rundown of what’s new, and you can decide for yourself if you should be excited or not.

I almost forgot to mention: I am also going to take some time to clear up some misconceptions about HDMI connectors and cables along the way, which means I will have to cover some basic information that will result in this post reading like a buying guide.

Contrary to popular belief, and Monster Cable’s marketing department, there are not eight different versions of HDMI cables floating around in the marketplace; there are four, and the HDMI Consortium labels these cables based on their bandwidth (I’m trying to avoid using the word “speed”). From a consumer’s perspective, the bandwidth ratings mainly affect the supported television resolutions, but there are some hidden features bundled in along the way, so you have to pay close attention to get the best performance from a cable. The four categories of HDMI cables are Standard, High-Speed, Premium High-Speed, and Ultra High-Speed.

The Cable Breakdown

Standard HDMI cables were the first ones made available to the public; they launched with the original HDMI 1.0 specification, and as such, they primarily support the features that were available through HDMI 1.0 connectors. The most notable aspect of Standard cables is that they do not support 1080p resolution. It was not until the introduction of High-Speed HDMI cabling that consumers were able to enjoy the benefits of 1080p televisions.

High-Speed HDMI cables support the majority of features that customers find on modern-day televisions, so if you bought cables in the last five years, they are probably High-Speed rated, as most retailers have removed Standard ones from their shelves. Every so often I run into some Standard cables on clearance, in places like Home Depot or Lowe’s, and my only hope is that customers are not purchasing them under the impression that all HDMI cables are the same. In addition to 1080p resolution, High-Speed cables also added support for 3D HDTVs (I’m not sure anyone still manufactures those), x.v.Color (Deep Color), and 4K resolution (2160p).

After reading the list of features supported through High-Speed cabling, and then comparing them to the features available on your current HDTV, you are probably wondering how there are still two more cable ratings to go. I’ll be honest; there is not much of a difference between High-Speed and Premium High-Speed cables. The most notable features deal with unlocking the full potential of 4K content, ultimately showing up as the HDR feature. So while High-Speed cables support 4K content transmissions, if you want the most out of that new television, a new cable purchase may be in order.

Finally, we have arrived at Ultra High-Speed cabling, or as your favorite marketing department calls it, “Future-Proof Cabling.” Ultra High-Speed cables support every feature, on every device, currently on the market. They support resolutions up to 10K, although most consumers will likely see 8K as the next logical step in HDTV resolutions; I just wouldn’t hold my breath for either resolution to become widely available anytime soon (4K still isn’t there yet). These cables also include support for Dolby Vision, other HDR specifications, and Quick Media Switching, alleviating the blank screen that appears for ~2 seconds while you are switching inputs.
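To keep the four categories straight, here is a small lookup sketch summarizing them. The bandwidth figures are the commonly published certification ratings and the feature summaries are simplified, so treat both as approximate rather than authoritative.

```python
# The four HDMI cable categories, keyed by the bandwidth each is certified for.
# Figures are the commonly published ratings; treat them as approximate.

HDMI_CABLES = {
    "Standard":           {"bandwidth_gbps": 4.95, "headline": "720p/1080i"},
    "High-Speed":         {"bandwidth_gbps": 10.2, "headline": "1080p, 4K@30, 3D, Deep Color"},
    "Premium High-Speed": {"bandwidth_gbps": 18.0, "headline": "4K@60 with HDR"},
    "Ultra High-Speed":   {"bandwidth_gbps": 48.0, "headline": "8K/10K, Dolby Vision, QMS"},
}

def cables_for(required_gbps: float) -> list[str]:
    """Return the cable categories rated for at least the required bandwidth."""
    return [name for name, spec in HDMI_CABLES.items()
            if spec["bandwidth_gbps"] >= required_gbps]

# Example: a 4K@60 HDR signal needs roughly 18 Gbps of headroom.
print(cables_for(18.0))   # ['Premium High-Speed', 'Ultra High-Speed']
```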

The Connection Breakdown

Now that I have taken the time to make sure you are all caught up on cables, it is time to talk about the new HDMI 2.1 connectors. Why? Because that is the topic of this article, but explaining how to enable all of the specification’s features is nearly impossible without making sure you have an understanding of cabling basics. The reason for my concern is that there is no clear correlation between cables and connectors. That’s right: there are only four HDMI cable categories, but there have been roughly seven different types of HDMI connectors released over the last ten years.

The 2.1 specification focuses on tweaking the previously released HDMI 2.0 connector specs, and most of the features are tied up in minute tweaks at an engineering level. There is the Variable Refresh Rate (VRR) feature, reducing the amount of lag higher-resolution televisions produce during gaming. There is the aforementioned Quick Media Switching (QMS), reducing the amount of time there is no picture on-screen while switching HDMI inputs. However, it is the ability to transmit resolutions up to 10K that has most manufacturers taking notice.

It should come as no surprise to anyone who covers HDTV sales that software-based features have failed to drive new hardware sales in recent years. Whether we are discussing 3D TV, Smart TV, or HDR, it seems as if the only thing that motivates HDTV enthusiasts to make a new purchase is a discernible change in resolution. After all, the switch to 4K has brought about new competition between content providers, a new type of Blu-ray player, and new versions of the most popular gaming systems.

The new 2.1 specification can usher in a new set of HDTVs, a new disc format, and all-new cabling, and force content and internet providers to step up their game once again. Consumers should never forget that the goal of a specification change is to drive sales, and when it comes to the new HDMI connectors, consumers cannot realize the full potential of their systems without a complete makeover. Now, let’s talk about how to configure all these components.

Configuration Breakdown

What’s often lost in explanations of HDMI configurations is the concept of the lowest common connection. If someone has ever told you that “you are only as strong as your weakest link,” he or she could have been talking about your HDMI setup. When it comes to putting everything together, the features available through HDMI are dictated by the lowest-featured cable, or connection, in the chain.

The optimal situation for HDMI 2.1 involves both pieces of equipment having new connectors, linked together with an Ultra High-Speed cable, resulting in every feature being available. In extreme cases, connecting two HDMI 1.3 devices with a Standard HDMI cable will restrict the feature set to those enabled by HDMI 1.0 connectors. The most common situation in most households involves reusing cables or connecting a new television to an outdated cable box. In scenarios like this, even if your TV has the latest HDMI ports and a new Ultra High-Speed cable securely plugged into it, the features available will be restricted by the HDMI 1.1 connector outputting the signal from your cable box.
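Here is a small sketch of that weakest-link rule, modeling each cable and connector as a set of supported features and taking their intersection. The feature sets are illustrative stand-ins, not complete spec listings.

```python
# "Weakest link" rule for an HDMI chain: the features you actually get are the
# intersection of what every connector and cable in the path supports.
# The feature sets below are illustrative stand-ins, not exhaustive spec lists.

SUPPORTS = {
    "tv_hdmi_2.1_port":   {"1080p", "4K", "4K_HDR", "8K", "VRR", "QMS"},
    "ultra_high_speed":   {"1080p", "4K", "4K_HDR", "8K", "VRR", "QMS"},
    "cable_box_hdmi_1.1": {"1080p"},
}

def available_features(chain: list[str]) -> set[str]:
    """Intersect the capabilities of every link in the connection chain."""
    features = SUPPORTS[chain[0]].copy()
    for link in chain[1:]:
        features &= SUPPORTS[link]
    return features

# New TV + new cable, but an outdated cable box still caps the whole chain.
print(available_features(["cable_box_hdmi_1.1", "ultra_high_speed", "tv_hdmi_2.1_port"]))
# -> {'1080p'}
```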

Hopefully, the infographic associated with this article provides a sufficient aid to understanding everything that I’ve covered, but if it doesn’t, you can always post questions in the comments section. With everything laid out on the table regarding the new HDMI 2.1 specification, I leave it to you to decide whether upgrading your hardware is worth it. Will you update your disc players, televisions, gaming systems, and cables in preparation for 8K/10K content?


How Election Meddling Saved Digital Marketing

It’s the Blueprint

Digital marketers owe Russia a big thank you – no, really, I mean it. If any questions remain about the validity of social media as a means of digital marketing, the recent inquiries into the ads displayed on multiple platforms have clearly illustrated one point – digital marketing works. You see, I work in digital marketing, and I’ve always had to deal with skepticism regarding the effectiveness of the products and services that I sell through Facebook, Google, and other digital platforms, but the recent scandal related to the 2016 Presidential Election has illuminated some hard-to-ignore statistics.

Honestly, I couldn’t have run a better campaign than Russia did; it’s almost like they had someone assisting them with how digital algorithms work [Ed Snowden], explaining that both the strength and weakness of digital marketing is that there are no gatekeepers. Some figurative campaign manager was asking everyone all the right questions.

Who is there to validate your intentions when a platform is entirely automated?

Who is there to review the content of ads when they don’t use trademarks?

Who is there to make sure you’re not an adversarial nation running ads to influence political outcomes?

The answer to all of those questions is no one, and you really can’t place the blame on Facebook, Google, and Twitter for that answer, as their platforms delivered precisely what they promised to advertisers. They built an effective, trackable, inexpensive way to reach millions of people, with only theoretical limits on the number of impressions a single piece of viral content can achieve.

Advertisers have been clamoring for these kinds of tools for years, and if it takes a bit of election meddling to get people to stand up and pay attention to the most influential mass communication platforms to ever exist, maybe the resulting discussions will lead us to use them in a more productive way than posting photos of our food.

The Actual Blueprint

Let’s take a look at how a foreign power used digital marketing to run the perfect advertising campaign.

– 1. Find Something People Are Passionate About – A lot of companies are on social media because it fills a series of modern-day marketing tool checkboxes that somehow make an organization feel validated as “contemporary.” In the age of Millennials, organizations avoid being grouped with traditional advertising platforms that most of the younger generation thinks are prehistoric, but the key to a significant social media presence is proximity to subjects that naturally promote discussion, or as Twitter users know it, trolling.

In the case of the “alleged” Russian campaign meddling, the passionate topic is obvious – politics. It’s safe to assume anything that shouldn’t be discussed at work, or at a bar, will generate a vast amount of discussion on social media, so before companies dump a large amount of effort into posting videos, photos, or GIFs, they should make sure their content contains something to be social about.

– 2. Know Your Audience – Thanks to some conveniently placed public voting demographic information, the campaign knew exactly who they were targeting and used the sophisticated tools offered by social media platforms to hit their targets. Often, the biggest mistake made in digital marketing is something outside of the marketers’ control: the client doesn’t understand their target demographic well enough to achieve the kind of conversions they’re seeking, resulting in a less-than-optimal impact.

Unlike traditional marketing platforms, which are the metaphorical equivalent to a bullhorn, digital platforms target potential customers with surgical precision, but that precision can only be achieved through the availability of accurate demographic data (you should have analytics installed by now). If an organization has failed to gather a vast amount of reliable data, even the best marketing company will fail to produce the desired results.

– 3. Leverage Organic and Paid Channels – It’s great to be able to pay to get your content in front of potential customers, but the number one rule of advertising still hasn’t changed: “word of mouth is the best advertising.” While the Russian campaign boasts a whopping ~29 million impressions from paid ads on Facebook, it’s the 126 million organic impressions that should floor you (there’s some quick math after this list).

Social media is the digital version of word of mouth, and when combined with point #1, you can see the effect a passionate group of people can have on digital reach. Users are more receptive to information that appears in their feed if it originates from a friend’s account, so if the content is sporting the “sponsored” moniker, users are less likely to pay attention to what’s on-screen.

– 4. Timing Is Everything – It’s not enough to sloppily throw ads on the net and expect some big return on investment; great campaigns are run within a particular window for a reason, and proper planning regarding the times and dates they appear makes all the difference. Regarding the timing of this campaign, the “when” is a bit obvious: the ads had to run before election day, so there was a clearly defined window of opportunity for the effort to be completed.

All too often, businesses run ads in windows without statistically backed justification, resulting in minimal impact on their business objectives. Executing a digital campaign isn’t any different from planning a traditional one, so businesses should expect to display ads related to specific events that provide marketing opportunities, like back-to-school or Christmas.
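Here is the quick math on the paid-versus-organic split from point #3, using the impression figures cited above.

```python
# Quick math on the reach figures cited above for the paid vs. organic split:
# ~29 million paid impressions vs. 126 million organic impressions.

paid_impressions = 29_000_000
organic_impressions = 126_000_000

amplification = organic_impressions / paid_impressions
organic_share = organic_impressions / (paid_impressions + organic_impressions)

print(f"Organic reach was ~{amplification:.1f}x the paid reach")
print(f"Organic impressions were ~{organic_share:.0%} of total impressions")
# ~4.3x and ~81% -- the "word of mouth" channel dwarfed the paid one.
```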

What We Learned as Marketers

Businesses aren’t the only entities that should have learned something from the election meddling revelations. Marketers should have gleaned some congressionally mandated insight into each platform and used it to better understand the effectiveness of each channel. Even if you aren’t into the gritty details of each platform, the representatives of those platforms were forced to explain their own data capabilities when testifying, and that was worth a couple of hours of watching C-SPAN in and of itself.

If you’re looking to partner with a digital marketing agency that can deliver insights like these to your organization, RTR Digital offers a variety of digital marketing services here.


The Ugly Truth About Self-Driving Cars

Great Expectations

Recently, I’ve been reading a lot of articles trying to temper expectations related to autonomous vehicles, and with great satisfaction, I would like to say…it’s about time. If you bothered to read my article about VR being overhyped and under-delivered, you probably noticed I mentioned some other technologies that fall into that same category, and autonomous vehicles are one of them. It’s not that I’m skeptical about the benefits of the technology; I just understand that achieving those benefits is significantly further down the road than anyone wants to admit. If you don’t believe me, keep reading…

According to IHS Automotive, a leader in automobile industry statistics, at the beginning of 2016, “the average age of all light vehicles on the road in the U.S. had climbed slightly to 11.5 years.” Even if fully autonomous cars were available today, America wouldn’t see any significant market penetration for at least a decade, and most of it would be limited to higher socioeconomic areas. To everyone who thought self-driving cars were going to be bobbing and weaving down the streets of their local cities by 2020, you should probably prepare to be disappointed.

You may be asking yourself: why is the timeline so important? It’s important because one of the most significant benefits promised through the evolution of autonomous vehicles is related to safety, and achieving it can’t be accomplished until autonomous vehicles comprise ~90% of all cars on the road. Keep in mind that number is my personal calculation, but until self-driving cars make up a significant portion of vehicles on the road, cities won’t see any significant decrease in the number of automotive accidents that occur every year.
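To see why the timeline stretches out, here is a toy fleet-turnover model. The fleet size and annual sales figures are rough ballpark assumptions, the 100% autonomous sales share is deliberately generous, and the ~90% target is my own estimate from above.

```python
# Toy fleet-turnover model: how long until autonomous vehicles reach ~90% of
# the cars on the road? All inputs are illustrative assumptions.

fleet_size = 270_000_000    # assumed U.S. light-vehicle fleet
annual_sales = 17_000_000   # assumed new vehicles sold per year
av_share_of_sales = 1.0     # best case: every new car sold is autonomous
target_share = 0.90         # my ~90% safety threshold from above

av_on_road = 0
years = 0
while av_on_road / fleet_size < target_share:
    # each year, new sales replace the oldest (non-autonomous) vehicles
    av_on_road = min(fleet_size, av_on_road + annual_sales * av_share_of_sales)
    years += 1

print(f"~{years} years, even if 100% of new sales were autonomous starting today")
```

Even under that best-case assumption, the model lands around fifteen years; at realistic adoption rates, the wait gets considerably longer.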

Human Error

Do you know the most common cause of accidents for self-driving vehicles? It’s human error, the same thing it’s always been. Accidents have been happening for the same reason for as long as I can remember: someone makes a wrong decision and puts other people’s lives at risk, and placing a computer at the helm of one of the vehicles won’t change this fact as long as there are humans on the road with them. Waymo, the autonomous vehicle spinoff from Google, stopped reporting its accidents at the beginning of 2018, making it harder for interested parties to keep up with its efforts to remove human error from the roadways, but the good news is that California has archived all of the previous reports on its site if you’re interested in reading through them.

The most common accident type reported was humans rear-ending self-driving cars. Computers don’t make decisions; they make calculations. That means autonomous vehicles will ALWAYS run the risk of being plowed into at a yellow light when there are humans in the cars surrounding them. If a computer-controlled vehicle can safely stop before the intersection, it will do so, while its human counterparts can be expected to merely say the light was “pink” when they hit the intersection. Human behavior of this type is precisely why autonomous vehicles face such an uphill battle when it comes to public acceptance.

Humans expect the vehicles around them to make decisions the way they do, and that means running into the back of a lot of computer-driven vehicles. Running a yellow light is one of the riskiest human driving behaviors on the road, one that we take for granted as we’re driving with other humans, but it’s also one that computers won’t tolerate. Other behaviors, like coming to a complete stop at a stop sign (something that never happens in California…) or before turning right on a red light, will also lead to accidents between computer-driven and human-driven vehicles.

Winner Take All

Computers will always strive to provide an element of society that humans can never achieve, perfection, and their achievement of it will only further highlight human imperfections (more accidents). Ultimately, it will be a human that forces a self-driving car to choose between saving the lives of its passengers or taking the lives of other drivers. Right now, some engineer is sitting in a room evaluating a Kobayashi Maru scenario that forces a self-driving car to choose the lesser of two evils in an unwinnable situation.

For example, a human driver falls asleep and crosses over into oncoming traffic, and someone has to die. Will your self-driving car decide to save its passengers or the passengers in another vehicle? You won’t know the answer to the question until it’s too late. Simply knowing that an engineer has to program a predetermined outcome into a computer for this scenario is already a scary enough thought. What I’m more afraid of is the method that needs to be employed to significantly decrease the chances of any unnecessary carnage happening as a result of these kinds of scenarios.

In a situation where a catastrophic event is inevitable, and death is an assured outcome, the best way to minimize the damage is to make sure all autonomous vehicles react to the situation in the same way. I’ll give you a second to digest that…

“To prevent additional cars from being involved in accidents, all autonomous vehicles on the road should be running the same system so they can anticipate the calculations of other vehicles in their proximity. Think of it as hive mind.”

If a car suddenly blows a tire on the freeway, every autonomous vehicle should avoid the car in unison, at the same speed, in the same direction, to prevent any unnecessary collisions. If all the cars are running the same system, the other self-driving vehicles on the road won’t need to guess the calculations of the other vehicles involved; they’ll already know what’s going to happen. Instead of having a ten-car pileup, the result is a two-car accident, saving more lives in the process.
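Here is a deliberately simplified sketch of that hive-mind idea: a single deterministic policy shared by every vehicle, so identical inputs always produce identical maneuvers. The function and the maneuver logic are invented purely for illustration.

```python
# Simplified "hive mind" illustration: a shared, deterministic policy means every
# vehicle computes the same maneuver from the same hazard report, so each car can
# predict its neighbors' reactions instead of guessing. Purely illustrative logic.

def shared_avoidance_policy(hazard_lane: int, my_lane: int, num_lanes: int) -> str:
    """Deterministic rule every vehicle runs: identical inputs, identical output."""
    if my_lane != hazard_lane:
        return "hold lane, reduce speed 10%"
    # hazard is in my lane: shift away from it, toward the side with room
    return "shift left, brake" if hazard_lane == num_lanes - 1 else "shift right, brake"

# A tire blows in lane 2 of a 3-lane freeway; every car evaluates the same rule.
hazard_lane, num_lanes = 2, 3
for car_lane in range(num_lanes):
    print(f"car in lane {car_lane}: {shared_avoidance_policy(hazard_lane, car_lane, num_lanes)}")
```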

This aspect of the technology isn’t frequently discussed, but we all know what it means: someone needs to have a monopoly on self-driving vehicle technology. Even though the United States has antitrust laws in place, to truly reach the pinnacle of efficiency concerning autonomous vehicles, only one technology should be implemented. So I’m putting everyone on notice…the self-driving car market is playing a winner-take-all game, and they should all know that winning is everything.


A Reality Check for Augmented Reality

The Next Big Thing

First, there was the IoT, then came wearables, and I can’t remember if virtual reality or self-driving cars came next, but I’m sad to say, none of these will pan out to be worthwhile technology investments (not actual investment advice). All of Silicon Valley’s latest technology flavors of the month have the same undeniable allure as base-metal alchemy. They all revolve around sound theories, like using energy to turn lead into gold, but the amount of energy it takes to make it happen isn’t worth the effort. I recently read an article on CNBC.com alluding to the fact that industry outsiders are starting to pick up on the fact that VR is struggling, and they’ve already counted Facebook’s acquisition of Oculus as a miss for Mark Zuckerberg.

I’m sure Oculus isn’t the only VR headset having a problem living up to the hype, because most of us have already set our expectations of the technology somewhere in the upper stratosphere. Since the early ’90s, movies like The Lawnmower Man have wowed audiences with the possibilities of a virtual world. If that title is a little bit too obscure for you, we won’t forgo mentioning The Matrix, and if you’re feeling really geeky, you’ll respect my name-drop of Sword Art Online. If you’ve seen any of the previously mentioned virtual reality-based movies, you might recognize there is a common element in all of them – “the rig.”

The Rig

The “rig” is a pretty generic term for the contraptions the characters in these flicks strap themselves into while diving into the virtual world. The reason the rig is so crucial in these movies is that it provides a way for the characters to immerse themselves in those worlds without requiring physical movement, something that doesn’t exist today. The ability to enable virtual movement without physical effort is the key component missing from today’s headset-based AR/VR units, and its absence will limit their commercial success. If you’ve ever used an Xbox Kinect, you probably know where this conversation is headed.

For me, the Kinect was as close as any game manufacturer has actually come to producing a fully interactive experience [some of us figured out you could still play Wii on the couch], and it opened my eyes to the fallacy that virtual reality represents to the general public. Virtual reality, as it is perceived today, is not the next evolutionary step from where Nintendo’s Wii and the Xbox Kinect left us. Those systems were designed to have the appeal of adding physical movement to a traditionally sedentary activity, and if we’re honest, the marketing undertone of “get your potentially obese kid off the couch” was designed to get more parents on board with gaming. Ultimately, what those systems taught us was that we don’t want our virtual experience to require physical movement [not what they intended].

Signs of Exhaustion

The first hint that our newfound love for immersion might not work out was when game developers had to start labeling how much physical exertion each Kinect game required. I’ve always been in reasonably good shape, but after an hour and a half of Kinect Adventures, I was ready to hit the showers. At the same time, I started hearing rumors on the internet of people passing out while gaming, and don’t quote me, but I’m pretty sure at least one person died playing the Wii. Either way, we were all reminded why we wanted our virtual worlds to remain separate from our real ones. A lot of the activities we participate in while gaming are things we are unable to do in real life, so if they start requiring physical movements, you’ll find a lot of us pressing the off button.

A secondary effect of all our newly immersive consoles was an increase in the amount of floor space gaming consumed. Until the Kinect arrived, there was no need to move the coffee table, notify my downstairs neighbors of potential noise, and put on slip-resistant footwear, but now all of those things had to happen before I put the disc in the console. A single player game on Kinect required approximately six feet of space to play, so for apartment dwellers, two-player gaming was mainly out of the question. If this was the kind of space required for the limited in-game movements these games offered, how much space is necessary to reproduce an entire virtual world?

1:1 Movement

One-to-one, say it with me, one-to-one. This ratio is the heart-breaking reality of why the current iterations of VR will never be a success. As of right now, virtual reality has a 1:1 movement ratio, requiring users to move one foot in the real world for every foot they would like to move in the virtual one. This situation compounds every negative aspect of virtual gaming I spoke about in the previous section. Imagine playing a first-person shooter in VR… How much running and jumping does the average avatar do in a single match? Are you planning on doing that in the real world too?

Some newer accessories to VR headsets have illustrated that developers are somewhat aware of the massive problem they are facing, but with each new addition, we get further away from the VR experience we’ve seen in the movies. Oculus and Samsung have both introduced controllers to help alleviate the problem, and I’ve also seen a few custom solutions floating around the internet, but these new accessories introduce a harsh reality that VR/AR may simply be traditional gaming with an expensive peripheral.

The moment a controller is added to the VR experience, gamers become conscious they’re just playing a regular video game while wearing a headset, and the gaming experience returns to people sitting on the couches with controllers in their hands. Until virtual reality develops the ability to “plug you in,” just like they do in The Matrix, I’m afraid the technology will continue to devolve, and I can already see Silicon Valley trying to lower expectations by marketing the experience as “augmented” instead of “virtual.” Without being able to fix the 1:1 problem, I hate to say it, but VR is not the next big thing.


Why The Internet of Things is Failing

Reality Check!

Let’s take a second to step back into reality, suspending the influence of Silicon Valley’s hype machine and taking the time to analyze the current situation of the Internet of Things (IoT). If I had to give Silicon Valley a grade for how well it has influenced consumers’ awareness of the IoT, it would be an “F,” and I don’t think I would be the only person to deliver that evaluation. Overall, Silicon Valley has failed to maintain the excitement around “The Internet of Things,” with consumers understanding very little about how these connected devices benefit them, and more importantly, not really caring. Before you can convince the world that a network of connected devices is the future of productivity, you first have to convince them of some smaller, more tangible points.

Networking Woes

If you’re [Silicon Valley] going to take on the task of connecting every device known to man, I think it would be a good idea to start by making devices easier to connect to a network. My background is littered with networking horror stories from a variety of consumer electronics retailers, and right now, consumers’ frustration with basic networking could potentially be the single most significant hurdle to a world of connected devices. The “Networking Equipment” category is consistently one of the most frustrating for retailers, with return rates always ranking among the highest in the store, and it poses a customer service nightmare for every party involved.

You see, networking is one of the only categories in retail that enlists more than two parties to safely and securely create a home network. A best-case scenario limits the interactions to three entities: the retailer who sold the connected device, the internet service provider (ISP), and the manufacturer of the networking equipment. Any business transaction involving more than two parties opens itself up to a plethora of problems (I’m looking at you, Uber), and in this case, three is definitely a crowd.

To alleviate this problem, manufacturers of networking equipment have decided the most natural thing to do is engineer themselves across the finish line (a typical response from geeks). Innovation after innovation has been applied to networking equipment, starting with connection wizards, peaking with Wi-Fi Protected Setup (WPS), and sadly ending with auto-connecting mesh routers. Personally, I probably would have given up when consumers weren’t able to figure out one-touch connections with WPS, but the industry keeps trudging along.

Security Woes

Even when customers successfully connect the latest generation of connected devices, things haven’t always gone as planned, as demonstrated by a string of highly publicized security breaches dominating the headlines. In October of 2016, one of the most significant internet outages ever witnessed was caused by hacked IoT devices. 2017 was ushered in by the release of Brickerbot, an attack specifically designed to permanently disable poorly designed IoT devices, a process known as “bricking.” All of these security snafus are linked to one specific manufacturing goal – maintaining the bottom line.

IoT devices are manufactured in the same manner as any other product, which means manufacturers adhere to the same priorities, with the goal being to manufacture these products at the lowest possible price point. This translates into lower memory capacity, less built-in security, and minimal investment in human IT resources. The result has been a large quantity of these devices shipping in their default configuration, and customers who don’t possess the knowledge or patience to change that configuration leaving them as is.

So what’s wrong with the default configuration? To put it in layman’s terms, it’s the equivalent of breaking the first rule of Fight Club. Everyone knows the rule, it should be easy to follow, but it continues to be broken. The first rule of internet security is “never leave your device’s username and password on the default settings,” as doing so creates an opportunity for anyone who can read an instruction manual to access the device [you should be thinking about your security right now].

Talking Solutions

I’m not here to bitch-n-moan about the world without offering up some solutions. As a former corporate trainer in the consumer electronics space, I understand the importance of consumer education and how much better off a situation can become by applying a bit of knowledge to it. RTR Digital offers a Networking Basics course at our learning site, RTR Learning, but if you’re not going to enroll, we’ll still provide some basic tips.

Change the Default Settings – Every device is shipped with a default username and password (usually on a sticker on the device) as a way for users to access the setup menu, configure the device, and then change the password so no one else can access the administrator settings. Never leave the username or password set to “admin” or “password1234”.

Name Your Wireless Network Something Abstract – When configuring your wireless network, don’t include any personally identifiable information (e.g., name, street number, house color). If someone is determined to access a network, physical proximity is the key, and associating the network with its location gives up way too much information.

Use the Guest Network Feature – Setting up a guest network is a feature on almost all modern networking equipment – use it. The guest network feature enables you to hand out the password to your wireless network without exposing your personal information in the process. Devices on the guest network are given internet access but are in a separate part of the network from the devices connected to the primary network.

Create Unique Passwords – When creating passwords, every device should have a different password, which means you can’t use your kids’ birthdays every time. It may seem like a hassle, but to make it easier, you should come up with a system that enables you to remember them. You can try something like the first three letters of the manufacturer’s name, followed by the purchase month and year of the device.
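For anyone who wants that last system spelled out, here is a tiny sketch of it (the function name is mine). Keep in mind it only makes passwords unique and memorable; it’s not a substitute for adding a longer, random portion.

```python
# Sketch of the memorability scheme described in the last tip: first three letters
# of the manufacturer plus the purchase month and year. This only keeps passwords
# unique and easy to recall; pair it with a longer random portion for real strength.

def device_password(manufacturer: str, purchase_month: int, purchase_year: int) -> str:
    prefix = manufacturer[:3].capitalize()
    return f"{prefix}{purchase_month:02d}{purchase_year}"

print(device_password("Netgear", 3, 2018))   # Net032018
print(device_password("Linksys", 11, 2017))  # Lin112017
```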

Ultimately, it’s up to retailers, manufacturers, and consumers to take control of their own responsibilities when dealing with connected devices. There are three parties involved in the future of the IoT, and all three of them have to decide how bad they want the world that is promised to them. Feel free to leave comments or questions on this post, and we’ll be sure to respond.
