You Don’t Care About Facebook Data Privacy

Facebook

 

Data Breach

Face it. You don’t care about how Cambridge Analytica acquired your personal information. If you did, you would have been up in arms about data breaches long before anyone ever outed them for using your Facebook data to assist in an “alleged” election scandal. The truth about what happened with the people, and the data, in the latest instance of internet malfeasance won’t affect you any more than the last ten data breach scandals. As individuals, we’ve become increasingly adept at moving on with our lives as if nothing happened, especially if the events in question didn’t directly affect us. So just as you forgot every other outrageous breach of trust related to personal data in less than two months, this too will pass.

 

The reason we can’t hold on to a grudge longer than a couple of months may have something to do with our digital devices. Between glances at our smartphones, smartwatches, and computer screens, there are real-world events happening out there, or at least that’s what I’ve heard. I’ve been working in the technology sector for a long time, and for the majority of it, I’ve been able to maintain the balance between interacting in the digital world and in real life (IRL). Those who are slightly younger than me only know the former, and that has led to a world that requires anyone hoping to gain the attention of Millennials to find ways to inject information into the digital realm.

 

As a result of the narrowing number of ways we receive information, society is in the middle of an information war between two distinct forms of media – social media vs. traditional media outlets. The scariest part of the war is that it may have already been fought and lost by everyone involved. Seriously, there may be no winners. This post isn’t just about the Facebook data scandal, it’s about the effect access to instantaneous information can have on society, and it’s a post that comes way too late to make any material change. For those interested in the general well-being of their communities, it’s probably a post that will be worth a couple of good conversations at the water cooler, but beyond that, I fear all is lost.

 

Old Money vs. New Money

Burning Money

 

The ability to shape the face of our nation, and any other for that matter, has always been controlled through access to information. The era of Gen Xers had a set of clearly defined gatekeepers that influenced the nation in the ways they saw fit. The world of radio, television, and print was dominated by behemoth corporations that changed society through the sounds, images, and words people absorb on a daily basis, and the writers, photographers, and musicians were handpicked to cultivate our thoughts and impressions on critical issues. Before the digital era, these were the only forms of mass communication that existed, and for all intents and purposes, those in power liked it that way. These information gatekeepers acquired power, influence, and what people around me always referred to as “old money.”

 

At one point in my life, I remember everyone around me perpetuating the idea that “old money” was impenetrable. There was nothing that would ever be able to pierce the wall of power and influence that the people involved with institutionalized information distribution had built up. After all, how can you start a revolution if you can’t spread the word? The idea of other mediums carrying the same weight, and trust level, with the public that companies like CBS or the New York Times garnered was unthinkable. Who would ever put their trust in some random article that appeared on the Internet? No reasonable person, right?

 

Welcome Millennials, a generation of individuals who believe everything they read on the internet. Do you know why? Because they read it…on the internet. Not only does this generation trust what they see and hear on platforms like Facebook, Instagram, and Twitter, but the speed at which it’s delivered was unthinkable to traditional media outlets ten years ago. As a result, social media sites are the primary source of news for a lot of individuals. There’s now an entire generation of people who expect their information to be customized, instantaneous, and accurate, something “old money” media outlets have yet to truly master.

 

Pay to Play

Money

 

For the first time, companies can distribute information to the masses instantaneously, but how accurate the information is, and who has the right to send it, is what is under real scrutiny in the Facebook scandal. Companies like Google and Facebook have amassed vast quantities of personal information about everyone on the planet, and have decided to let anyone who is willing to pay a fee leverage it.

 

The truth about digital marketing platforms is that they are scarily effective at targeting demographic subsections, and narrowing down advertisements by gender, age, location, and interest is only a drop-down menu away for anyone who has the cash. Only now, after the outcome of an election has come under scrutiny, are people seemingly paying attention to the fact that advertising on digital platforms is open to everyone. Where was their outrage when the IPO was lining their pockets? Where was the need for regulation when Facebook ran behavioral experiments in news feeds?

 

Let’s all be honest with ourselves. Our outrage over the current Facebook scandal is one of convenience, of little importance, and one that will ultimately change very little. We don’t have the attention spans for drawn-out debates anymore, and because we can’t see how it directly affects our lives, we’ll eventually move on. The only people who care about the outcome of this debate are people fighting for power and influence at a level most of us will never attain. So I suggest we all move on to the next fake social outrage issue because this one isn’t one where we can honestly say we care.

 

Why Virtual Reality is Not the Next Big Thing

High Expectations

VR Experience

 

First, there was the IoT, then came wearables, and I can’t remember if virtual reality or self-driving cars came next, but I’m sad to say, none of these will pan out to be worthwhile technology investments. All of Silicon Valley’s latest technology flavors of the month have the same undeniable allure of base-metal alchemy. They all revolve around sound theories, like using energy to turn lead into gold, but the amount of energy it takes to make it happen isn’t worth the effort. I recently read an article on CNBC.com that alluded to the fact that industry outsiders are starting to pick up on the fact that VR is struggling, and they’ve already counted Facebook’s acquisition of Oculus as a miss for Mark Zuckerberg.

 

I’m sure Oculus isn’t the only VR headset having problems living up to the hype because most of us have already set our expectations of the technology somewhere in the upper stratosphere. Since the early ’90s, movies like Lawnmower Man have wowed audiences with the possibilities of a totally virtual world. If that title is a little bit too obscure for you, we won’t forgo mentioning The Matrix, and if you’re feeling really geeky, you’ll respect my name drop of Sword Art Online. If you’ve seen more than one virtual reality-based movie, you might recognize there is a common element in all of them – “the rig.”

 

The “rig” is a pretty generic term for the contraptions the characters in these flicks strap themselves into while diving into the virtual world. The reason the rig is so important in these movies is that it provides a way for these characters to immerse themselves in a virtual world without requiring physical movement, something that doesn’t exist today. The headset-only style of virtual reality is missing a key component of its commercial success: the ability to effectively enable virtual movement without requiring physical effort. If you’ve ever used an Xbox Kinect, you probably know where this conversation is headed.

 

Virtual Movement and Real Movement Don’t Mix

Xbox Kinect
Courtesy of Xbox.com

 

For me, the Kinect was as close as any game manufacturer has come to producing a fully interactive experience [some of us figured out you could still play Wii on the couch], and it opened my eyes to the fallacy that virtual reality represents to the general public. Virtual reality, as it is perceived today, is not the next evolutionary step from where Nintendo’s Wii and the Xbox Kinect left us. Those systems were designed to have the appeal of adding physical movement to a traditionally sedentary activity, and if we’re being honest, the marketing undertone of “get your potentially obese kid off the couch” was designed to get more parents onboard with gaming. Ultimately, what those systems taught us was that we don’t want our virtual experience to require physical movement [not what they intended], and our attention to such devices will eventually wane as time moves on.

 

The first hint that our newfound love for immersion might not work out was when game developers had to start labeling how much physical exertion each Kinect game required. I’ve always been in fairly good shape, but after an hour and a half of Kinect Adventures, I was ready to hit the showers. I started hearing rumors on the internet of people passing out while gaming, and don’t quote me, but I’m pretty sure at least one person died playing the Wii. Either way, we were all reminded why we wanted our virtual worlds to remain separate from our real ones. A lot of the activities we participate in while gaming are things we are unable to do in real life, so if they start requiring physical movements, you’ll find a lot of us pressing the off button.

 

A secondary effect of all our newly immersive consoles was an increase in the amount of floor space gaming consumed. Until the Kinect arrived, there was no need to move the coffee table, notify my downstairs neighbors of potential noise, or put on slip-resistant footwear, but now all of those things had to happen before I put the disc in the console. A single-player game on Kinect required approximately six feet of space to play, so for apartment dwellers, two-player gaming was essentially out of the question. If this was the kind of space required for the limited in-game movements these games offered, how much space is required to reproduce an entire virtual world?

 

1:1 Movement

One-to-one, say it with me, one-to-one. This ratio is the heart-breaking reality of why current iterations of VR will never be a success. As of right now, virtual reality has a 1:1 movement ratio, requiring users to move one foot in the real world for every foot they would like to move in the virtual one. This situation compounds every negative aspect of virtual gaming I spoke about in the previous section. Imagine playing a first-person shooter in VR… How much running and jumping does the average avatar do in a single match? Are you planning on doing that in the real world too?
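To make the complaint concrete, here is a minimal sketch of what a movement ratio means in code. The function, the numbers, and the idea of a tunable scale factor are my own illustration, not part of any real VR SDK.

```python
# Minimal sketch of the movement-ratio problem (hypothetical numbers, not any real VR SDK).
# With a 1:1 ratio, every virtual meter costs a real meter; anything greater than 1.0
# would let a small physical step translate into much larger virtual travel.

def virtual_displacement(physical_meters: float, movement_ratio: float = 1.0) -> float:
    """Return how far the avatar moves for a given physical movement."""
    return physical_meters * movement_ratio

# Today's headset-only VR: walk 100 m in your living room to cover 100 m in-game.
print(virtual_displacement(100.0, movement_ratio=1.0))   # 100.0

# The experience the movies promise: tiny physical input, large virtual output.
print(virtual_displacement(0.5, movement_ratio=200.0))   # 100.0
```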

 

Some recent accessories to VR headsets have illustrated that developers are somewhat aware of the massive problem they are facing, but with each new addition, we get further away from the VR experience we’ve seen in the movies. Oculus and Samsung have both introduced controllers to help alleviate the problem, and I’ve also seen a few custom solutions floating around the internet, but these new accessories introduce developers to their own harsh reality. The moment a controller is added to the VR experience, gamers become conscious they’re just playing a regular video game while wearing a headset, and the experience returns to people sitting on the couches with controllers in their hands.

 

Until virtual reality develops the ability to “plug you in,” just like they do in The Matrix, I’m afraid the technology will continue to devolve, and I can already see Silicon Valley trying to lower expectations by marketing the experience as “augmented” instead of “virtual.” Without being able to fix the 1:1 problem, I hate to say it, but VR is not the next big thing.

 

Why the Internet of Things is Failing

The Overhype Machine

Gears Turning
Actual Photo of Machine

 

Let’s take a second to step back into reality, suspending the influence of Silicon Valley’s hype machine, and take the time to analyze the current situation of the Internet of Things (IoT). If I had to give Silicon Valley a grade for how well they have influenced consumers’ awareness of the IoT, it would be an “F,” and I don’t think I would be the only person to deliver that evaluation. Silicon Valley has failed to maintain the excitement around “The Internet of Things,” with consumers understanding very little about how these connected devices will benefit them, and more importantly, not really caring. Before you can convince the world that a network of connected devices is the future of productivity, you first have to convince them of some smaller, more tangible points.

 

If you’re [Silicon Valley] going to take on the task of connecting every device known to man, I think it would be a good idea to start by trying to make devices easier to connect to a network. My background is littered with networking horror stories from a variety of consumer electronics retailers, and right now, consumers’ frustration with basic networking could potentially be the single greatest hurdle to a world of connected devices. The networking equipment category is consistently one of the most frustrating for retailers, with return rates always ranking among the highest in the store, and it poses a customer service nightmare for every party involved.

 

Networking Woes

Networking Cables

 

You see, networking is one of the only categories in retail that enlists multiple third parties to safely and securely create a home network. A best-case scenario limits the interactions to three entities: the retailer who sold the connected device, the internet service provider (ISP), and the manufacturer of the networking equipment. Any business transaction that involves more than two parties opens itself up to a plethora of problems (I’m looking at you, Uber), and in this case, three is a crowd.

 

To alleviate this problem, manufacturers of networking equipment have decided the easiest thing to do is engineer themselves across the finish line (a typical response from geeks). Innovation after innovation has been applied to networking equipment, starting with connection wizards, peaking with Wi-Fi Protected Setup (WPS), and sadly ending with auto-connecting mesh routers. Personally, I probably would have given up when consumers weren’t able to figure out one-touch connections with WPS, but the industry keeps trudging along.

 

Security Woes

Security Warning

 

When customers have been able to successfully connect the latest generation of connected devices, things haven’t always gone as planned, as demonstrated by a string of highly publicized security breaches dominating the headlines. In October of 2016, one of the largest internet outages ever witnessed was caused by hacked IoT devices. 2017 was ushered in by the release of Brickerbot, an attack specifically designed to permanently disable poorly designed IoT devices, a process known as “bricking.” All of these security catastrophes can be linked back to one specific goal – maintaining the bottom line.

 

Manufacturing IoT devices happens in the same manner as any other product, which means manufacturers adhere to the same priorities, with the goal being to manufacture these products at the lowest possible price point. This translates into lower memory capacity, less built-in security, and minimal investment in human IT resources. The result has been a large quantity of these devices shipping to customers in their default configuration, and customers not knowing how, or not having the patience, to change their configurations.

 

So what’s wrong with the default configuration? To put this in layman’s terms, it’s the equivalent of breaking the first rule of Fight Club. Everyone knows the rule, it should be easy to follow, and yet it continues to be broken. The first rule of internet security is: never leave your device’s username and password on the default settings, as doing so leaves the device accessible to anyone who has ever read the manual for that particular device [you should be thinking about your security right now].

 

Talking Solutions

I’m not here to bitch-n-moan about the world without offering up some solutions. As a former corporate trainer in the consumer electronics space, I understand the importance of consumer education and how much better off a situation can become by applying a bit of knowledge to it. RTR Digital offers a Networking Basics course at our learning site, RTR Learning, but if you’re not going to enroll, we’ll still provide some basic tips.

 

1 – Change the Default Settings – Every device is shipped with a default username and password (usually on a sticker on the device) that is designed as a way for users to access the setup menu, configure the device, and then change the password so no one else can access the administrator settings. Never leave the username or password set to “admin” or “password1234”.

 

2 – Name Your Wireless Network Something Abstract – When configuring your wireless network, don’t include any personally identifiable information (e.g., name, street number, house color). If someone is determined to access a network, physical proximity is the key, and associating the network with its location is giving up way too much information.

 

3 – Use the Guest Network Feature – If your networking equipment has a guest network feature – use it. The guest network feature will enable you to hand out the password to your wireless network without exposing your personal information in the process. Devices on the guest network are given internet access but are kept in a separate part of the network from devices that are connected to the primary one.

 

4 – Create Unique Passwords – When creating passwords, every device should have a different password, which means you can’t use your kids’ birthdays every time. It may seem like a hassle, but to make it easier, you should come up with a system that enables you to remember them. You can try something like…the first three letters of the manufacturer’s name…followed by the month and year the device was purchased (a rough sketch of that kind of system follows below).
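For anyone who likes to see a system written down, here is a minimal sketch of the pattern described in tip #4. The exact format is only an illustration of the idea, not a security recommendation; a real password should add more length and complexity on top of it.

```python
# Rough sketch of the memorable-password system from tip #4.
# The format (three letters + purchase month/year) is purely illustrative;
# adding a symbol or a longer personal phrase makes it considerably stronger.

def device_password(manufacturer: str, purchase_month: int, purchase_year: int) -> str:
    """Build a password from the manufacturer's name and the purchase date."""
    prefix = manufacturer[:3].capitalize()          # e.g., "Net" for Netgear
    return f"{prefix}!{purchase_month:02d}{purchase_year}"

print(device_password("Netgear", 3, 2018))   # Net!032018
print(device_password("Linksys", 11, 2017))  # Lin!112017
```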

 

Ultimately, it will be up to retailers, manufacturers, and consumers to take control of their responsibilities when dealing with connected devices. There are three parties involved with the future of the IoT, and all three of them will have to decide how badly they want the world that was promised to them. Feel free to leave comments or questions on this post, and we’ll be sure to respond.

 

The Future of E-commerce

Coffee is For Closers

Coffee Mug

 

I’m still waiting for Xfinity to add “coffee is for closers” to the voice search capabilities of the X1 system. It’s the quintessential line from Glengarry Glen Ross and the perfect summation of the high-pressure sales environment that fuels the top line for a lot of companies. If you’ve ever been in sales, you’re probably aware that there are a lot more things that are only for closers, like bonuses, job security, and respect. We used to live in a world that was driven by sales interactions and relationship management, but now it’s all about clicks and two-day delivery.

 

It wasn’t too long after Jet.com launched that I came across an article from TechCrunch stating that, in just one month, it had become the 4th largest online marketplace, and that was a pretty scary thought for me. If I’m to believe everything I’ve been told about how important face-to-face contact and relationship building are to the sales process, please explain to me why Amazon is looking a lot like the ’72 Miami Dolphins, and Jet isn’t too far behind (maybe the 2007 NE Patriots). How can a company with no salespeople and a minuscule brick-and-mortar presence be the most feared company in retail?

 

Amazon has been blamed for the downfall of almost every major retailer under the sun, and if I operated a brick-and-mortar location, I would be more than a little concerned about the latest industry assessments. The number of distressed retailers with a credit rating of CCC continues to increase, and this holiday season could be make-or-break for one of your favorite retailers. Every time it seems like we’ve been at DefCon 1 regarding the state of retail, a “new” strategy emerges to prevent further erosion of profit margins. Will this time be different, or will e-commerce finally deliver the fatal blow to big-box retailers?

 

Fear of Better Options (FOBO)

Fear Text

 

Often overlooked in the emergence of e-commerce are the psychological aspects of being a customer in modern America. There’s a reason manufacturers always want to break into the American retail marketplace; it’s all about volume. America has more retail space per person than any other country in the world, and not many other countries have retail chains that have scaled to the same size as our biggest and baddest retailers. At its peak, RadioShack operated more than 7,000 stores across four countries, Best Buy was operating over 1,500 big boxes, and Sears Holdings was operating near that same level with 1,300 locations. At one point, getting picked up by any of these retailers meant you had graduated to the big leagues, and you could sit back and watch the money roll in.

 

With a variety of retailers and locations to choose from, customers eventually began to wonder if they were getting the best deal possible. There was a psychological shift that promoted a fear of better options at all times. I remember when customers first started printing online ads and bringing them to retail locations where I was working. I also remember store managers doing everything possible to keep us from having to honor those prices. I admit I’ve told more than a few customers that “online products don’t come with a manufacturer’s warranty” (at the behest of my managers) just to keep from having to price-match.

 

Ultimately, those customers never intended to buy those products online; they simply wanted us to match the price. At the time, customers still wanted the full retail experience of qualifying, demonstrating, and closing, and the only way to get a lot of those high-ticket items out the door involved a retail sales associate. Buying a big-screen TV online was unheard of, and the trust in online retailers simply wasn’t there yet, but as products became commoditized, more customers took a chance on e-commerce and were pleasantly surprised by the results.

 

If You Can’t Beat Em…

A secondary effect of product categories becoming commodities was that shopping became more price-point driven and less reliant on services. The idea of a “closer” would eventually lose out to a button that reads “Add to Cart.” As the “Add to Cart” button has become psychologically accepted by more customers, retailers have slowly, but surely, transitioned from fighting the growing e-commerce trend to making sure they aren’t left behind. Since 2015, retailers have been beefing up their online presence and using their stores more as online pickup locations than as places where consumers are meant to shop.

 

I used to think that retailers would figure out they need to differentiate the in-store experience from the online one, but now that I’ve seen some of their budgets, I realize those options aren’t a real possibility for a lot of them. Most retailers are so strapped for cash right now that getting them to invest in anything that doesn’t directly increase their sales is out of the question, so associate training and development is pretty much a non-starter. Instead of my previous strategy of using my company’s cutting-edge digital training capabilities to elevate the in-store experience, I’ve come to the conclusion that it might be time to take the “closer” online.

 

If retailers are slowly abandoning the in-store experience, who am I, as a third-party service provider, to tell them they’re wrong? One of the golden rules of sales is to take the path of least resistance, so instead of focusing our technologies on in-store sales, we went all digital (hence the name RTR Digital). The solution we’ve been developing is an application that serves as an online sales companion; we’re calling it the Virtual SalesPerson, or VSP for short. The idea is to provide an e-commerce closer that asks the right number of qualifying questions, adds accessories, and always asks for the sale. You see, virtual salespeople have no fear of rejection, never get tired, and never get irritated, making them potentially as effective online as they’ve been in stores.

 

The Truth About Self-Driving Cars

Car Accident

A State of Disbelief

Recently, I’ve been reading a lot of articles that are trying to temper the expectations related to autonomous vehicles, and with great satisfaction, I would like to say…it’s about time. If you bothered to read my article about VR being overhyped and underdelivered, you probably noticed I mentioned some other technologies as falling into that same category, and autonomous vehicles are one of them. It’s not that I’m skeptical about the benefits of the technology; I just understand that achieving those benefits is significantly further down the road than anyone wants to admit. If you don’t believe me, keep reading…

 

According to IHS Automotive, a leader in automobile industry statistics, at the beginning of 2016, “the average age of all light vehicles on the road in the U.S. had climbed slightly to 11.5 years.” Even if fully autonomous cars were available today, America wouldn’t see any significant market penetration for at least a decade, and most of it would be limited to higher socioeconomic areas. To everyone who thought self-driving cars were going to be bobbing and weaving down the streets of their local cities by 2020, you should probably prepare to be disappointed.

 

You may be asking yourself, why is the timeline so important? It’s important because one of the greatest benefits promised through the evolution of autonomous vehicles is related to safety, and it can’t be achieved until autonomous vehicles comprise ~90% of all cars on the road. Keep in mind that number is my personal calculation, but until self-driving cars make up a large portion of vehicles on the road, cities won’t see a significant decrease in the number of automotive accidents that occur every year.

 

Not as Safe as You Think

Finger Pointing

Do you know the most common cause of accidents for self-driving vehicles? It’s you! Accidents have always happened for the same reason: someone makes a bad decision and puts other people at risk, and placing a computer at the helm won’t change this fact as long as there are humans on the road with them. Waymo, the newly branded autonomous vehicle spinoff from Google, stopped reporting their accidents at the beginning of this year, making it harder for interested parties to keep up with their efforts to remove human error from the roadways, but the good news is, California has archived all of the previous reports on their site if you’re interested in reading through them.

 

The most common accident type reported was humans rear-ending self-driving cars, because computers don’t make decisions – they make calculations. Autonomous vehicles will NEVER run a yellow light if they can safely stop before the intersection. It’s the human in the vehicle behind them, expecting a similar decision-making process from the driver in front of them, who ends up running into the back of a computer-driven vehicle. Running a yellow light is one of the riskiest human driving behaviors on the road, one that we take for granted when we’re driving with other humans, but it’s also one that computers simply won’t tolerate. Things like coming to a complete stop at a stop sign (never happens in California…California Roll in effect) or before turning right on a red light will all lead to accidents with human-driven vehicles.

 

Computers will always strive to provide an element of society that humans can never achieve, perfection, and their achievements will only further highlight human imperfections (more accidents). Ultimately, it will be a human that forces a self-driving car to choose between saving the lives of its passengers or taking the lives of other drivers. Right now, some engineer is sitting in a room evaluating a Kobayashi Maru scenario that forces a self-driving car to choose the lesser of two evils in an unwinnable situation. For example, a human driver falls asleep and crosses over into oncoming traffic, and someone has to die. Will your self-driving car decide to try to save its passengers or the passengers in another vehicle? You won’t know the answer to that question until it’s too late.

 

Winner Take All

 

“To prevent additional cars from being involved in accidents, all autonomous vehicles on the road should be running the same system so they can anticipate the calculations of other cars in their proximity. Think of it like a hive mind.”

 

Knowing that a computer will have to choose whether I live or die is already a scary enough thought, but I’m more afraid of the method that needs to be employed to significantly decrease the chances of any unnecessary carnage happening as a secondary outcome. In a situation where a catastrophic event is inevitable, and death is an assured outcome, the best way to minimize the damage is to make sure all autonomous vehicles react to the situation in the same way. I’ll give you a second to digest that… To prevent additional cars from being involved in accidents, all autonomous vehicles on the road should be running the same system so they can anticipate the calculations of other cars in their proximity. Think of it as a hive mind.

 

If a car suddenly blows a tire on the freeway, every autonomous vehicle should avoid the car, in unison, at the same speed, in the same direction, to prevent any unnecessary collisions. If all the cars are running the same system, the other self-driving cars on the road won’t need to guess the calculations of the other cars involved; they’ll already know what’s going to happen. Instead of a ten-car pileup, the result is a two-car accident, and more lives will be saved in the process.
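To show what the “hive mind” idea looks like in practice, here is a minimal sketch of a shared, deterministic maneuver rule. The vehicles, the lane model, and the rule itself are hypothetical placeholders of my own, vastly simplified compared to any real autonomous driving stack.

```python
# Minimal sketch of the "hive mind" idea: if every vehicle runs the same deterministic
# rule over the same shared event data, each one can predict what its neighbors will do.
# The maneuver logic is a hypothetical placeholder, not any real AV system.

from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    lane: int

def planned_maneuver(vehicle: Vehicle, hazard_lane: int) -> str:
    """Every vehicle applies the identical rule, so the outcome is predictable to all."""
    if vehicle.lane == hazard_lane:
        return "brake and shift one lane right"
    return "hold lane and reduce speed 10%"

fleet = [Vehicle("A", lane=1), Vehicle("B", lane=2), Vehicle("C", lane=1)]
hazard_lane = 1  # a car blows a tire in lane 1

# Because the rule is shared, vehicle B already knows A and C will move right.
for v in fleet:
    print(v.vehicle_id, "->", planned_maneuver(v, hazard_lane))
```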

 

This aspect of the technology isn’t frequently discussed, but we all know what it means: someone needs to have a monopoly on self-driving vehicle technology. Even though the United States has antitrust laws in place, to truly reach the pinnacle of efficiency with autonomous vehicles, only one technology should be implemented. So I’m putting everyone on notice…the self-driving car market is playing a winner-take-all game, and they should all know that if they don’t win, they’ll lose everything.

 

A Final Warning on Net Neutrality

Green Light

 

“Stupid is as Stupid does”

Sometimes the most intelligent people can make the dumbest mistakes. Most intelligent people acquire knowledge in the same manner as the rest of us, through trial and error, but it typically takes them less time to understand the practical applications of what they’ve learned and apply it in a useful manner. Other times, it can take a bit of “mansplaining” before even the most intelligent people comprehend what should be an obvious lesson. To alleviate any possibility of the cable industry looking back at their decisions regarding net neutrality and claiming there was no way to have predicted the outcome, I am kindly going to deliver some free “mansplaining” before they make the biggest mistake in the history of media companies. I am going to state this as clearly as possible. LEAVE NET NEUTRALITY ALONE!

 

For those who are not as familiar with the topic of net neutrality as us hardcore techies, I am going to take a minute to summarize (in layman’s terms) what it is, and why you should care about it. If you are reading this post, you are probably one of the millions of people in the world who access some kind of multimedia via the internet. If you subscribe to Netflix, Hulu, Amazon Prime, Spotify, Apple Music, Pandora, Directv Now, or any other streaming subscription service, you fall into the category of people I am addressing and should make sure to read this article in its entirety, or at least far enough to get pissed off. I am taking the time to warn big cable not to do something that would ultimately make your life easier in the long term, so it is okay if you are a little angry (be sure to leave your angry quotes in the comments section).

 

If you have one of the aforementioned subscription streaming services, you probably enjoy access to thousands of music and movie titles via an internet connection that’s provided by a cable or internet operator, and until now, that has not been a problem. Since the beginning of streaming, the companies that provide internet connections to your homes have adhered to a simple principle called Net Neutrality. The policy goes something like this: as long as you are paying for your internet connection, whatever you decide to download via that connection is up to you, and the internet companies do not influence how that content is delivered. Recently, the content consumers are choosing to stream has inhibited the ability of those same cable and internet companies to monetize their own content, so now they are lobbying the FCC to remove the rules that created the Net Neutrality policy. This minute change would enable them to charge more for content coming through their pipelines that originates from competing services, thereby shifting the competitive landscape in their favor.

 

History Repeats Itself

You have probably been hearing many techies trying to convince consumers that net neutrality needs to stay in place, and taken at face value, that argument would appear to be correct. However, if you dig a little deeper, you will see that the abolishment of Net Neutrality could be the best thing for those of us who choose to access our favorite media via the internet. Just in case you are wondering how the abolishment of Net Neutrality could ever work out in the consumer’s favor, all you have to do is take a look at the current state of the wireless carrier industry.

 

In the wireless industry, changing service providers has always been less of a hassle for customers than it is on the wired (home services) side of the industry. Because this transition has always been easier for consumers to make, wireless carriers have served as a bit of a canary in a coal mine for the wired industry, acting as a testing ground for acquiring and maintaining services. The battle over data has been going on for the past decade for wireless providers, but for cable and internet operators, we are at the beginning of a mass erosion of pricing.

 

It was not that long ago that conversations with wireless executives about unlimited data plans resulted in executives stating, with 100% confidence, that they would never have to offer unlimited data plans to their customers. Less than five years later, things have changed, with wireless agreements including unlimited talk, text, and data, in addition to offering consumers the choice between Netflix, Hulu, and HBO, depending on which carrier they decide to sign with. So, how did we go from one end of the spectrum entirely to the other in less than five years? It all revolves around competition related to the price point of data.

 

Make no mistake, talk and text are ones and zeros just like any other form of data, but streaming music and videos via applications puts a strain on wireless networks for which they were unprepared. Providers initially thought to charge by the kilobit of data, but eventually realized the cost passed on to consumers was prohibitive, ultimately slowing down their ability to sell more advanced devices. This aspect of their business created a point of competition between providers that previously was nonexistent. The newly sparked competition regarding data price points resulted in something I rarely get to reference outside of hyperbolic arguments – a slippery slope.

 

It all started with T-Mobile.

T-Mobile was the first to break rank to acquire new customers. They began including music streaming services that don’t count against their customers’ data allotment to draw them away from more expensive providers. It was not long before other wireless providers were forced to follow suit, and even one-up one another, by including additional popular subscription services in their data plans as well. As the slope got slipperier, it did not take long for the first provider to declare their plans as having unlimited data. Sprint was the first to announce (in case you were wondering), and the era of “truly unlimited” (still getting throttled) data plans began. So what does all of this have to do with Net Neutrality?

 

Contrary to popular belief, cable companies do not really compete with each other. If you look at a map of how providers are laid out, you will find there are only a few locations where consumers have the choice of more than one cable provider. Ultimately, this means that there are few aspects of service that are competitive and lead to better pricing and service for your average consumer. Thus, you should probably be rooting for Net Neutrality to go away, because if it does, cable and internet companies will more than likely start offering unlimited data plans to homes, and may even be forced to throw in some free subscription services as well.

 

As of right now, most consumers are under the impression they are getting unlimited data when they sign up for home internet, but this is not really the case; most home internet services have a 1 Terabyte monthly data limit. Because most customers never reach that limit, they are relatively unaware of its existence, even though their provider has probably already notified them about it. These limits were preemptively put in place because streaming services like Netflix have beaten traditional cable providers to the punch when it comes to delivering Ultra High-Definition (4K) content to homes.
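To put that 1 TB cap in perspective, here is a rough back-of-the-envelope calculation. The per-hour data figures are approximations I am assuming for illustration; actual bitrates vary by service and compression.

```python
# Back-of-the-envelope math on the 1 TB cap. The per-hour figures are rough
# assumptions for illustration; actual streaming bitrates vary by service.

cap_gb = 1000          # 1 TB monthly cap, in gigabytes
hd_gb_per_hour = 3     # rough figure for 1080p streaming
uhd_gb_per_hour = 7    # rough figure for 4K/UHD streaming

print(cap_gb / hd_gb_per_hour)    # ~333 hours of HD before hitting the cap
print(cap_gb / uhd_gb_per_hour)   # ~143 hours of 4K, under 5 hours a day
```

Seen that way, a household that moves its viewing to 4K can plausibly brush up against the cap, which is exactly the pressure the next section describes.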

 

The delivery of 4K content places the same strain on wired infrastructure that music and video previously placed on wireless networks. As 4K video has become a point of emphasis for retailers and content providers, the ability to transmit this type of content has become a differentiator for service providers. While companies like Netflix are already providing 4K video to homes via the internet, they are not responsible for maintaining the network that transmits it. We can argue about the fairness of this arrangement later, but for now, we’ve reached the core of the Net Neutrality dilemma.

 

What Now?

As the amount of data used to deliver 4K content to homes increases, consumers will inevitably realize their home internet service plans more closely resemble the restrictive wireless data plans of the past than the newer unlimited data plans of the future, creating the same dilemma the aforementioned wireless carriers were in just five years ago. On the one hand, cable and internet providers could leave home internet services and Net Neutrality as they stand, collecting overages whenever a customer surpasses their limit; on the other, they can cave now and start offering consumers easier paths to 4K content.

 

The former strategy involves trying to hold on long enough to figure out their delivery system, hoping other cable companies hold the line (proving they were never really competing in the first place) and don’t start down the slippery slope of offering higher data limits to gain market share. Alternatively, they can abolish Net Neutrality now, unleashing an immediate flurry of competition that will undoubtedly lead down the rabbit hole of including everything but the kitchen sink to maintain video subscribers.

 

Let’s be real…there is nothing these cable and internet providers can do to stop what is coming; the best they can hope for is to provide the best customer experience and keep as many subscribers as they can. This situation was bound to emerge one way or the other, and in all honesty, it is a lose/lose for all of them. The real opportunity lies in their ability to do something they have never done before…give customers what they want without having to pry it out of them with a crowbar. Go unlimited now, and save us all some time and effort.

 

How Russia Saved Digital Marketing

The Digital Marketing Blueprint

Digital marketers owe Russia a big thank you. No, really, I mean it. If there were any questions remaining about the validity of digital marketing, the recent inquiries into the ads displayed on multiple platforms have clearly illustrated one point – digital marketing works. You see, I work in digital marketing, and I’ve always had to deal with skepticism regarding the effectiveness of the products and services that I sell through Facebook, Google, and other digital platforms, but the recent election scandal has brought some hard-to-ignore statistics into the light.

 

Blueprint
The Blueprint

To be honest, I couldn’t have run a better campaign than Russia did; it’s almost like they had someone assisting them with how digital algorithms work [Ed Snowden], explaining that the strength and weakness of digital ads is that there are no gatekeepers. Who is there to validate that you are an American citizen when the ad platform is international? Who is there to review the content of ads when they don’t use trademarks? Who is there to make sure you’re not an adversarial nation running ads to influence U.S. political outcomes? The answer to all of those questions is no one.

 

You really can’t place the blame on Facebook, Google, and Twitter, as they delivered exactly what they promised to advertisers. They built an effective, trackable, inexpensive way to reach millions of people, one with only theoretical limits on the number of impressions a single piece of viral content can acquire. Advertisers have been clamoring for this for years, and if it takes a bit of election meddling to get people to stand up and pay attention to the most influential mass communication platforms to ever exist, maybe the resulting discussion will lead us to use them in a more productive way than posting photos of our food. Let’s take a look at how a foreign power used digital marketing to run the perfect advertising campaign.

 

1. Find Something People are Passionate About – A lot of companies are on social media just to be on social media, and having these accounts revolves around a series of checkboxes that keep the social media coordinator employed. The key to a great social presence is proximity to subjects that naturally promote discussion, or as Twitter users know it, trolling. In the case of the Russian campaign, the passionate topic is obvious – politics. It’s safe to assume anything that shouldn’t be discussed at work, or at a bar, will generate a vast amount of discussion on social media. Before companies dump a large amount of effort into posting videos, photos, or GIFs, they should make sure their content contains something to be social about.

 

2. Know Your Audience – Thanks to some conveniently placed public voting demographic information, the Russians knew exactly who they were targeting, and they used the sophisticated targeting tools offered by social media platforms to hit their marks. Oftentimes, the biggest mistake made in digital campaigns is something outside of the marketers’ control…the client doesn’t understand their target demographic well enough to achieve the kind of conversions they’re seeking when running a campaign. Unlike traditional marketing platforms, which are the metaphorical equivalent of a bullhorn, digital platforms target potential customers with surgical precision, but that precision can only be achieved through the availability of accurate demographic data (you should have analytics installed by now).

 

3. Leverage Organic and Paid Channels – It’s great to be able to pay to get your content in front of potential customers, but the number one rule of advertising still hasn’t changed – word of mouth is the best advertising. While the Russian campaign boasts a whopping ~29 million impressions from paid ads on Facebook, it’s the 126 million organic impressions that should floor you. Social media is the digital version of word of mouth, and when combined with point #1, you can see the effect a passionate group of people can have on digital reach. Users are more receptive to information that appears in their feed if it originates from a friend’s account, so if the content is sporting the “sponsored” moniker, users are less likely to pay attention to what’s on-screen.

 

4. Timing is Everything – It’s not enough to sloppily throw ads on the net and expect some big return; great campaigns are run within a certain window for a reason, and proper planning regarding the times and dates ads appear can make all the difference. Related to the election, the timing is a bit obvious: everything had to be displayed before election day, and there was a clearly defined window of opportunity for marketing. All too often, businesses run ads in windows without statistically backed justification. Planning a digital campaign isn’t any different from planning a traditional one, so businesses should plan to display ads in relation to specific events like back-to-school or Christmas.

 

What We Learned as Marketers

Businesses aren’t the only entities that should have learned something from the election meddling revelations. Marketers should have gleaned some congressionally mandated insight into each platform and used it to better understand the effectiveness of each channel. Based on testimony in Congress, we should all know our preferred social media platform for ads, and that platform is Facebook. Personally, I’ve always believed Facebook had much more insight into its own platform than its competitors have into theirs, but Twitter’s inability to deliver concrete statistics related to the number of accounts, impressions, and impact of the Russian content really drives that point home for me.
 
I haven’t mentioned Google too much during this post because most of the congressional inquiries are focused on social media, and not search advertising (which is really Google’s specialty). If there is a platform that I hold in higher esteem than Facebook, it would definitely be AdWords, as its reach extends beyond the users of a particular social network. Ads displayed through Google reach anyone using their search engine, and that’s just about everyone on the planet. Their delivery of precise advertising data in relation to Russian ads was impressive, but it’s offset by the fact that those impressions were all of the paid variety. If there was one thing that Google definitely missed out on, it was having a solid social media platform, and the organic impressions generated by Facebook speak volumes about the impact of viral content.
 
Ultimately, I don’t imagine much will change related to the long-term outlook of any of these companies based on what’s discovered through these inquiries, but I do think it’s something digital marketers can use to make a case for digital marketing spend. It may seem shallow to turn something as serious as election meddling into a capitalistic approach to selling advertising, but isn’t that what living in a free democratic society is about? Otherwise, why are we voting?

Has HDMI 2.1 Been Worth the Wait?

HDMI Cable

HDMI 2.1 is here, but what does it mean for the average consumer? It is a pretty rare occasion that I meet consumers who are up to date on their HDMI specification knowledge, and I wonder if the HDMI Consortium is aware of this fact when they put out press releases. Sometimes, it seems like these press releases are written for engineers-only meetings, and I get the feeling it takes a blog post, like this one, to explain the practical applications of the new specification. So, I am going to give everyone a rundown of what’s new, and you can decide for yourself if you should be excited or not.

 

I almost forgot to mention: I am also going to take some time to clear up some misconceptions about HDMI connectors and cables along the way, and this means I will have to cover some basic information that will result in this post reading like a buying guide.

 

Contrary to popular belief, and Monster Cables’ marketing department, there are not eight different versions of HDMI cables floating around in the marketplace; there are four, and the HDMI Consortium places one of four labels on these cables based on their bandwidth (I am trying to avoid using the word “speed”).

 

From a consumer’s perspective, the bandwidth ratings mainly affect the supported television resolutions, but there are some hidden features bundled in along the way, so you have to pay close attention to get the best performance from a cable. The four categories of HDMI cables are Standard, High-Speed, Premium High-Speed, and Ultra High-Speed.

 

The Cable Breakdown

HDMI Cable Comparison
Cable Overview

Standard HDMI cables were the first ones that were made available to the public; they launched with the original HDMI 1.0 specification, and as such, they primarily support the features that were available through HDMI 1.0 connectors. The most notable aspect of Standard cables is that they do not support 1080p resolution. It was not until the introduction of High-Speed cables that consumers were able to enjoy the benefits of 1080p televisions.

 

High-Speed HDMI cables support the majority of features that customers find on modern-day televisions. If you bought cables in the last five years or so, then they are probably High-Speed rated, as most retailers have removed Standard ones from their shelves. Every so often I run into some Standard cables on clearance in places like Home Depot or Lowes, and my only hope is that customers are not purchasing them while under the impression that all HDMI cables are the same. In addition to 1080p resolution, High-Speed cables also added support for 3D HDTVs (not sure if anyone still manufactures those), x.v.Color (Deep Color), and 4K resolution (2160p).

 

After reading the list of features supported through High-Speed cabling, and then comparing them to the features available on your current HDTV, you are probably wondering how there are still two more cable ratings to go. I’ll be honest; there is not much of a difference between High-Speed and Premium High-Speed cables. The most notable features deal with unlocking the full potential of 4K content, ultimately showing up as the HDR feature. So while High-Speed cables support 4K content transmissions, if you want the most out of that new television, a new cable purchase may be in order.

 

Finally, we have arrived at Ultra High-Speed cabling, or as your favorite marketing department calls it, “Future Proof Cabling.” Ultra High-Speed cables support every feature, on every device, currently on the market. They support resolutions up to 10K; most consumers will likely see 8K as the next logical step in HDTV resolutions, but I would not hold my breath for that content to become widely available (4K still isn’t there yet). These cables also include support for Dolby Vision, another HDR specification, and Quick Switching, alleviating the blank screen that appears for a couple of seconds while you are switching inputs.

 

The Connection Breakdown

 

Now that I have taken the time to make sure you are all caught up on cables, it is time to talk about the new HDMI 2.1 connectors. Why? Because that is the topic of this article, but explaining how to enable all of the specification’s features is nearly impossible without making sure you have an understanding of cabling basics. The reason for my concern is that there is no clear correlation between cables and connectors. That’s right, there are only four HDMI cable categories, but there have been roughly seven different types of HDMI connectors released over the last ten years.

 

The 2.1 specification focuses on tweaking the previously released HDMI 2.0 connector specs, and most of the features are tied up in minute tweaks at an engineering level. There is the Variable Refresh Rate feature, reducing the amount of lag higher resolution televisions produce during gaming. There is the aforementioned Quick Media Switching (QMS), reducing the amount of time there is no picture on-screen while switching HDMI inputs. However, it is the ability to transmit resolutions up to 10K that has most manufacturers taking notice.

 

It should come as no surprise to anyone who covers HDTV sales that software-based features have failed to drive new hardware sales in recent years. Whether we are talking about 3D TV, Smart TV, or HDR, it seems as if the only thing that motivates HDTV enthusiasts to make a new purchase is a discernible change in resolution. After all, the switch to 4K has brought about new competition between content providers, a new type of Blu-ray player, and new versions of the most popular gaming systems.

 

The new 2.1 specifications can usher in a new set of HDTVs, a new disc format, all new cabling, and force content and internet providers to step up their game once again. Consumers should never forget that the goal of a specification is to drive sales, and when it comes to the new HDMI connectors, consumers will never realize the potential of their systems without a complete makeover. Now, let’s talk about how all these components are configured.

 

Configuration Breakdown

 

What’s often lost in the explanation of HDMI configurations is the comprehension of the lowest common connection. If someone has ever told you that “you are only as strong as your weakest link,” he or she could have been talking about your HDMI setup. When it comes to putting everything together, the features available through HDMI are dictated by the lowest featured cable, or connection, in the chain.

 

The optimal situation for HDMI 2.1 involves both pieces of equipment having new connectors, linked together with an Ultra High-Speed cable, resulting in every feature being available. In extreme cases, connecting two HDMI 1.3 devices with a Standard HDMI cable will restrict the feature set to those enabled with HDMI 1.0 connectors. The most common situation in most households involves reusing cables or connecting a new television to an outdated cable box. In scenarios like this, even if your TV has the latest HDMI ports and a new Ultra High-Speed cable securely plugged into it, the features available will be restricted by the HDMI 1.1 connector outputting the signal from your cable box.
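If it helps to see the “weakest link” rule written out, here is a minimal sketch. The numeric capability levels are made-up stand-ins purely for illustration, not values from the HDMI specification.

```python
# Minimal sketch of the "weakest link" rule for an HDMI chain. The numeric levels
# are made-up stand-ins for illustration, not values from the HDMI specification.

CAPABILITY = {
    "HDMI 1.0": 1, "HDMI 1.1": 2, "HDMI 1.3": 3, "HDMI 2.0": 4, "HDMI 2.1": 5,
    "Standard": 1, "High-Speed": 3, "Premium High-Speed": 4, "Ultra High-Speed": 5,
}

def effective_level(source: str, cable: str, display: str) -> str:
    """The usable feature set is capped by the least capable link in the chain."""
    return min((source, cable, display), key=CAPABILITY.get)

# New TV and Ultra High-Speed cable, old cable box: the box caps the whole chain.
print(effective_level("HDMI 1.1", "Ultra High-Speed", "HDMI 2.1"))  # HDMI 1.1
```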

 

With everything laid out on the table regarding HDMI 2.1 connectors, I leave it to you to decide if upgrading your hardware is worth it. Make sure to leave your comments on how you perceive the value of the new specification. Will you update your disc players, televisions, gaming systems, and cables?

Apple Slowed Your Phone – What Now?

Mobile Phone
It’s not over yet…

I have sat around and listened to information regarding Apple’s iPhone slowdown issue long enough to call bullsh!t on Apple. It is not that their engineering reasoning does not make sense…because it does; it is the fact that they would have to be able to see the future through some crystal ball to anticipate a problem like this one and effectively engineer a software solution that bothers me. Yes, I said a software solution. While everyone is out getting their batteries replaced in good faith, they might have overlooked the fact that a “solution” to this problem has to be applied where the problem was created – in iOS. Thus, a battery replacement alone will not bring the speed back to your iPhone, and this is what makes me question the motives surrounding this entire situation.

 

Realistically, I would be more likely to believe Apple’s statements about why they slowed older phones if this were their first offense, but it’s not, and their fanbase is so rabid about their favorite little device that they tend to forget the other times Cupertino’s darling has executed similar plans to boost profits. In 2012, Apple changed the iPhone charging port from its original 30-pin connector to the current Lightning port with the promise of added features. What consumers received were frayed/defective cables and the loss of the ability to use most third-party charging equipment, while Apple received a boost in adapter sales and branded “Genuine Apple” accessories. The result was a class action lawsuit claiming Apple was aware of what they were doing at the time and chose not to inform customers of the outcomes.

 

In 2014, it was an upgrade to iOS 8 that caused an uproar from the fanbase. Massive numbers of people were unable to upgrade to the latest mobile operating system because of insufficient memory in their devices. Apple squashed the rebellion with an iTunes workaround that made sure those lower capacity phones would be able to get the latest version of iOS. The new iOS surprised fans with a phone that had so little internal memory left over after the installation that the new system might as well have bricked (technical term) their phone. Moreover, once installed, the operating system cannot be rolled back, leaving anyone who made the effort to upgrade a lower capacity phone SOL. Maybe the set of class action lawsuits filed as a result will give customers a bit more closure than the 2012 ensemble.

 

Even with Apple’s suspicious history in mind, I was still willing to extend the benefit of the doubt because their statements about the effects of long-term battery usage are spot-on. Every time a phone opens, switches between, or powers on applications, or performs a slew of other functions, it requires a power surge from the battery, and as batteries age, they lose the ability to deliver those surges. The easiest way for most consumers to comprehend this is to think of their phones as cars, and the batteries that run them as gas tanks. Certain aspects of driving use significantly more gasoline than others, just like certain aspects of cell phone usage require more power than others.

 

For example, starting your car involves a simultaneous surge of gas to every component involved in the ignition process. The same is true for phones during startup; every electronic component needs to obtain a charge simultaneously to boot your operating system, requiring significantly more power than regular operations. Quick accelerations to switch lanes are like switching between applications; a sudden change in position also requires a significant, sudden increase in power. While there is a long list of car-related analogies that can explain sudden power surges, I think you get my point: not all battery usage is the same. So yes, it makes sense to slow the speed of the processor to accommodate aging batteries.

 

Okay, so if the reasoning for the slowdown is legitimate, why am I calling bullsh!t? It’s because of the method of deployment. iOS can only be engineered to deploy something like this in a couple of ways, both of which make me think this slowdown had nothing to do with concern for consumers and more to do with profits. The hardware, in this case, the battery, doesn’t have a sophisticated method for reporting its status to the operating system. There is no self-testing mechanism, no internal clock, and no sensors on a battery, so unless Apple is hiding some new battery tech I’ve never heard of, iOS has to be the culprit (Apple has admitted as much).

 

If you are a computer geek, Android superuser, or the Last Digital Jedi, you probably understand how adjusting the processor speed of a device has a dramatic effect on its power consumption. Geeks refer to this as “overclocking,” or “under-clocking,” depending on the direction of the adjustment. If you’ve ever executed the process, you are probably aware that it’s done through software adjustments and not through the power supplies themselves. So what? Why is this one little detail so nefarious when it comes to the Apple iPhone slowdown situation? It’s because the firmware would have to have had this function hidden in it from the very beginning, or have had it slipped into a recent update, to execute the slowdown maneuver. Here are the likely scenarios in which this process was executed.

 

One, every processor is adjusted differently, so iOS would need to figure out some method of identification to slow down the appropriate devices, and that is possible by looking at a device’s internal chip model. More than likely, the operating system is looking for the Apple A8 and A9 chipsets and reducing their processing power to conserve battery usage. The problem with this method is that devices containing these chipsets are still sold as new, and there would be no way of distinguishing devices purchased yesterday from those purchased three years ago. This kind of execution would be horrible for Apple customers; it would mean that a newly acquired iPhone 6s, or SE, would still result in a slow device regardless of its age. Nefarious indeed…

 

Two, Apple could be using the device’s internal clock to determine the phone’s age. Based on when the phone was activated, a calculation could evaluate the device’s age, enabling iOS to reduce the speed of the processor. If this is the case, the speed reduction is tied to a predetermined timetable that Apple put in place years ago and has nothing to do with the condition of the phone’s battery. The result for Apple customers is still the same, implying that Apple has been planning this for years and never notified any of their customers of the impending doom.
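To make the two scenarios concrete, here is a rough sketch of the kind of logic each one would imply. This is purely my speculation written as code, not Apple’s actual implementation; the chipset list, age threshold, and clock multiplier are illustrative placeholders.

```python
# Rough sketch of the two speculative throttling triggers described above.
# This is speculation written as code, not Apple's implementation; the chipset
# list, age threshold, and multiplier are illustrative placeholders only.

from datetime import date

OLDER_CHIPSETS = {"A8", "A9"}          # scenario one: identify by chip model
AGE_THRESHOLD_DAYS = 2 * 365           # scenario two: identify by activation age
THROTTLED_CLOCK_MULTIPLIER = 0.6       # fraction of full processor speed

def clock_multiplier(chipset: str, activation_date: date, today: date) -> float:
    """Return the processor speed multiplier under either speculative trigger."""
    device_age_days = (today - activation_date).days
    if chipset in OLDER_CHIPSETS or device_age_days > AGE_THRESHOLD_DAYS:
        return THROTTLED_CLOCK_MULTIPLIER
    return 1.0

# A brand-new iPhone SE (A9 chip) would still be throttled under scenario one.
print(clock_multiplier("A9", date(2018, 1, 15), date(2018, 2, 1)))   # 0.6
print(clock_multiplier("A11", date(2017, 11, 3), date(2018, 2, 1)))  # 1.0
```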

 

There are still some other ways to execute something like this, but all of them still imply a certain amount of nefarious behavior on Apple’s part, so I’ll skip to the summary. This is simply a bad look for Apple. Even changing out the battery won’t return your phone to its previous glory days; something has to be done in the OS to unleash the processing power once again. If a firmware patch is issued, those with new batteries will be back in action, and those without a replacement will experience some new issues with their devices. Ultimately, the result is already the same as in the other instances I mentioned, with class action lawsuits already in motion, but I can’t positively say if this time is going to be any different. The Apple faithful remain the Apple faithful, so maybe all these keystrokes have been for naught. Let me know what you think.