I wrote this in July 2020. I never published it before. I was just waking up to the reality of the technologies I’d spent so much time learning and practicing, and I wrote this as a way to organize my head around it.
I was also afraid.
These companies are powerful. Just look at the deplatforming of Parler in January. When I wrote this, the idea of a cloud provider committing a B2B contract violation for ideological purposes was completely theoretical. Yet, here we are.
I’m not afraid anymore.
Following the July 15, 2020 Twitter “Hack” scandal, host Dan Bongino asked on his show, “How much power do these people really have?” Now, Dan was referring to Twitter insiders, but you could equally ask the question of any technology company.
Let me pose a few questions: If someone was monitoring your every movement, would you mind? If they were using that data to influence your behavior, attitudes and choices, would that bother you? And what if I then told you that there was nothing you could do about it?
That’s where we are. We have allowed Big Tech too much power with too little oversight for far too long. And with that deficit in place, we draw ever-closer to the effective disruption of the coming, real Digital Revolution.
Think Huxley’s “Brave New World.” We’re nearly beyond Orwell’s “1984.”
Privacy is a historical relic. Free will is a farce. Meaningful oversight is mythology.
This is a highly technical subject, so I will attempt to paint an understandable picture with broad strokes. I have provided links to source material, but a little imagination is also required here, as I want you to take these facts and imagine where the future might go. Let’s dive in.
Data and Technology in 2020
Technology capabilities have exploded over the past 20 years, and data analytics capabilities are keeping pace. Just 10-15 years ago, it was nearly impossible to find any business value or insight in the massive amount of unstructured data that companies were generating. Most of this data (think log files, event streams, and cached data from an app or the Web) isn’t considered personally identifiable information (PII), so its governance and regulations are more relaxed than those for, say, your healthcare information or social security number. For the data that is PII, the law regulates how it can be gathered, stored and used; yet in the digital world, there is no real way to keep companies from gathering PII. Therefore, the 2020 regulatory compliance landscape focuses on masking PII as it enters a company’s systems. Masking. Familiar term.
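To make “masking” concrete, here is a minimal sketch of what a PII-masking step might look like. The field names, record shape, and choice of hashing are my own illustration, not any company’s actual compliance pipeline:

```python
import hashlib

# Fields treated as PII in this illustration; real compliance
# programs maintain much longer, regulator-driven lists.
PII_FIELDS = {"name", "email", "ssn"}

def mask_record(record: dict) -> dict:
    """Replace PII values with a truncated one-way hash, so the
    record can still be counted and joined on, but the raw value
    is never stored downstream."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

event = {"name": "Jane Doe", "email": "jane@example.com",
         "game": "Fortnite", "session_minutes": 47}
print(mask_record(event))
```

Note that hashing like this is pseudonymization, not true anonymization: the same input always produces the same output, so masked records can still be linked together.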
For fun, let’s explore this idea with video games. Now, I haven’t really played video games since I mastered “1080” in the late 1990s, but I have three minor boys who, especially during lockdown, can’t get enough. As their mom, of course, I worry. Gaming addiction and its impact on the pleasure centers in the brain is well-documented.
But with the crumbling state of the world and the lightning speed of technology innovation, it is the DATA of video games (and social and mobile) that literally keeps me awake at night.
My son plays Fortnite on Xbox. Let’s say he plays 10 rounds. In each of those rounds, his behaviors are tracked and logged. What behaviors? All of them: every time he shows empathy or shows no mercy, every time he chooses one weapon or power up over another, how he interacts with other players in the game. All tracked and logged automatically by data tools and automation built into the game.
David Smith explained this eloquently in his December 9, 2013 piece, “The impact of Big Data on video gamers,” when he said: “Because video games happen in a virtual world, it’s possible to measure just about every aspect of the game. It’s kind of like being able to observe a sports match or a battle, but being able to attach a telemetry sensor to every player, every weapon and bullet, every surface of the environment, and gather all that data in real time. Big Data…has made this possible, and video game companies routinely gather 50 terabytes of data per day to improve their games, operations and revenue.”
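The telemetry Smith describes is, at its core, just an event stream: every in-game action becomes a timestamped, structured record. Here is a toy version; the event names and fields are invented for illustration and don’t correspond to any real game’s logging:

```python
import json
import time

class Telemetry:
    """Minimal in-game event logger: each action is captured as a
    timestamped record, ready to be shipped off for analysis."""
    def __init__(self, player_id: str):
        self.player_id = player_id
        self.events = []

    def log(self, event_type: str, **details):
        self.events.append({
            "player": self.player_id,
            "type": event_type,
            "ts": time.time(),
            **details,
        })

t = Telemetry(player_id="p123")
t.log("weapon_selected", weapon="shotgun")
t.log("player_eliminated", victim="p456", mercy_shown=False)
t.log("structure_built", kind="ramp", x=120, y=88)
print(json.dumps(t.events[0]))
```

Multiply a handful of records like these by every click, every round, and every player, and you arrive at the tens of terabytes per day Smith cites.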
Consider the amount of data that would be generated from my son’s 10 rounds of Fortnite (and then consider the data from the hundreds of rounds he actually plays).
Everyone he fought or killed and the details of those encounters, everything he built and where and why, his main motivators and biggest fears, and so much more can be extrapolated from this data set and subsequent analysis.
This data is what game developers (and developers in general) use to improve upon their products. They see how people are using the product, their choices, patterns and more, and then they study that information and those insights, and they use them to make the next version better. I personally know a game developer; his number one goal is to make the best gaming experience possible, and he is very serious in the pursuit of that goal.
As Patrick Stafford noted in his May 2019 piece, “The dangers of in-game data collection: Can your choices come back to haunt you?”:
“…developers…see a chart that outlines how that game’s players are motivated…How do players interact within a game? What choices do they make? That information can be used to make better games. And it can also be combined with other types of information to build robust personal profiles. Those personal profiles are typically used to target advertising, but privacy experts warn that, in the future, that information may be used in sinister ways we can’t expect.”
I Dare Say: That Future is Upon Us
In order for my son to play Fortnite with his friends, and stream and chat with them during the games, we have to have Xbox Live. Of course there is a fee for Xbox Live, so the Xbox account stores credit card information as well as administrative user and demographic information. Both systems – the game (app) and Xbox (platform) – are collecting, refining and structuring the data associated with every single digital interaction you have.
Now consider we combine the two data sets: the intimate psychological profile from the game and the PII from the Xbox account. This isn’t hard: They are both on Microsoft servers.
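Combining the two data sets is a routine join on a shared key. A toy illustration follows; the account IDs, scores, and fields are all invented, and real profiles would be far richer:

```python
# Hypothetical gameplay-derived behavioral profile, keyed by account ID.
gameplay = {
    "acct-42": {"aggression": 0.81, "risk_taking": 0.64},
}
# Hypothetical platform account record for the same ID.
accounts = {
    "acct-42": {"name": "J. Doe", "age": 13, "zip": "90210",
                "card_on_file": True},
}

def join_profiles(gameplay, accounts):
    """Merge behavioral and identity data on the shared account key."""
    return {
        acct: {**accounts.get(acct, {}), **profile}
        for acct, profile in gameplay.items()
    }

combined = join_profiles(gameplay, accounts)
print(combined["acct-42"])
```

One dictionary merge, and an anonymous behavioral profile now has a name, an age, and a billing address attached.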
What might be the implications for my son? I reckon we’d have a pretty robust profile of my minor son and, as arrogant humans, we’d probably think we had him – and his entire demographic – all figured out.
Now zoom out, and realize that they (Big Tech) have this data – at least the raw, unstructured data – for all of us, down to the keystroke level. Much of the world’s data now resides in the cloud, which means it is stored and carefully protected in extremely secure data centers around the world, owned and operated by cloud service providers (Amazon Web Services, Google Cloud Platform, Microsoft Azure, etc.).
And it’s not just video games. The same data mining, distillation, structuring and analytics protocols are happening for every single online conversation, transaction, social media post, and other online behaviors – across every industry, in every nation, across the globe.
Now, in theory, this data is secure. The game data and the Xbox data are stored in different systems on different servers, with uber-secure access protocols and, of course, the actual pieces of data are broken up and stored in different locations around the world. But they still have the data. They own the servers. And as we saw in 2018’s United States v. Microsoft Corp., and the subsequent CLOUD Act, these companies can produce and provide this data to the US Government (and the governments of our 60+ intelligence sharing countries) whenever compelled.
Now Use Your Imagination
Since they have the data, they can, in theory, decide to analyze the macro data for both the game and the account. Even if they strip the PII – as the law requires – and keep only anonymized demographic information, they could still visualize that data at every level: global, national, state, county, city…individual.
And we know they can unmask masked data.
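This is well documented in the re-identification literature: “anonymized” records can often be linked back to named individuals through quasi-identifiers such as ZIP code, birth year and gender. Here is a toy linkage attack; both data sets are invented for illustration:

```python
# "Anonymized" records: names stripped, quasi-identifiers retained.
anonymized = [
    {"zip": "02138", "birth_year": 1983, "gender": "F", "diagnosis": "X"},
]
# A second, public data set (think: a voter roll) that includes names.
public = [
    {"name": "Alice Smith", "zip": "02138", "birth_year": 1983, "gender": "F"},
    {"name": "Bob Jones", "zip": "60601", "birth_year": 1975, "gender": "M"},
]

QUASI = ("zip", "birth_year", "gender")

def reidentify(anonymized, public):
    """Link records whose quasi-identifiers match exactly."""
    index = {tuple(p[q] for q in QUASI): p["name"] for p in public}
    return [
        {**rec, "name": index[key]}
        for rec in anonymized
        if (key := tuple(rec[q] for q in QUASI)) in index
    ]

print(reidentify(anonymized, public))
```

With only three ordinary attributes, the stripped record gets its name back – which is exactly why “we removed the PII” is weaker reassurance than it sounds.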
Who would want to access that data? Who ensures the data hosts are not sharing or selling it? What oversight exists to prevent these massively powerful companies from mismanaging this data? Consider: The who, how and why of mining, distilling, aggregating, analyzing and visualizing our data…would we ever even know? (Spoiler: We would not.)
In reality, we rely almost entirely on rapidly-evolving threat management and cyber security practices to prevent unauthorized access and nefarious activity. Yet, while our security capabilities are maturing and getting stronger every day, so too are bad actors’ hackery and social engineering shenanigans.
Ok, What About Security?
With modern, cloud-based technologies, security responsibilities are shared. To oversimplify for the sake of discussion: businesses secure their apps and data, CSPs secure the data centers and certain elements of the infrastructure, and the remaining responsibilities are split between provider and customer. There are also service-specific nuances, as security responsibilities can shift and vary based on which services you are using from your CSP. Very complex!
Further, as businesses increasingly adopt “everything-as-a-service” models, new tools and vendors are introduced; for example, tools for data storage, like Snowflake, data processing, like Hadoop, and analytics and visualization, like Tableau and Power BI, are all common elements of a modern IT environment. As a company’s cloud blueprint evolves and grows in complexity, clarity about security responsibilities becomes critical.
And therein lies a high potential for human error.
Of course, the recent Twitter hack comes to mind. But do you remember the 2019 Capital One hack?

Quick refresher: In July 2019, 33-year-old Paige Thompson, a former AWS engineer, exploited a vulnerability on Capital One’s side of the cloud shared-security model. This means that Capital One engineers made a mistake when setting up the security that they were responsible for; in this case, a misconfigured firewall. That mistake gave Thompson access to Capital One’s data, which she then stole and posted online, impacting the identity and financial health of more than 100 million Americans, at least six million Canadians, and many others around the world.
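A misconfiguration of this kind is often as mundane as a firewall rule left open to the entire internet. Here is a toy audit check; the rule format is invented, loosely modeled on cloud security-group rules, and is not Capital One’s actual configuration:

```python
# Hypothetical firewall / security-group rules.
rules = [
    {"port": 443, "source": "10.0.0.0/8"},   # internal network only: fine
    {"port": 80,  "source": "0.0.0.0/0"},    # open to the whole internet
]

def find_open_rules(rules):
    """Flag rules that allow inbound traffic from any address."""
    return [r for r in rules if r["source"] == "0.0.0.0/0"]

for rule in find_open_rules(rules):
    print(f"WARNING: port {rule['port']} is reachable from anywhere")
```

One wrong value in one rule – and everything behind it is suddenly within a determined attacker’s reach.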
CSPs are targeted by bad actors all the time, yet – as far as I can tell from my research – none of the major CSPs (AWS, GCP, Azure) has ever been successfully hacked. These companies are at the forefront of technology innovation. They have the best security engineers, innovative operating models and insurgent mindsets money can buy.
Cloud customers, however, get hacked all the time. Paige Thompson went on a hacking spree this time last year, successfully breaching 30+ companies in addition to Capital One.
Bottom line: There’s a healthy dose of the “trust system” involved when it comes to cloud security. And given the volumes of data we’ve talked about, and their inherent ability to disclose intimate details and profiles about each and every one of us and our behavior, it is incumbent on every human to be aware, stay informed and never stop asking questions.
Is This for Real?
I am not alleging that the cloud providers are misusing our data and playing God. I am merely asserting that they can. And asking the reader to consider the implications.
You don’t need to “learn to code” to understand what Cloud, Data & Security may mean to future Americans and our brothers and sisters around the globe. Just use your imagination.
Imagine you had all this data. How would it change your worldview? Imagine you could visualize and draw insights from worldwide demographics, in real time. You could see the patterns and trends – and socially experiment with different demographic groups to drive different behaviors. You would see success and failure with those experiments. Adjust, experiment, study, repeat.
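That “adjust, experiment, study, repeat” loop is essentially A/B testing run at population scale. A toy version, with an invented “nudge” and an invented engagement metric (real systems would test subtler interventions on far richer data):

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is repeatable

def engagement(user):
    # Stand-in metric: minutes played; the nudge adds a fixed boost
    # here purely to make the effect visible in this toy example.
    return user["minutes"] + (15 if user.get("nudged") else 0)

def nudge(user):
    """Apply some hypothetical behavioral intervention."""
    return {**user, "nudged": True}

def run_experiment(population, treat):
    """Randomly split users into control/treatment, apply the nudge
    to the treatment group, and compare mean engagement."""
    control, treatment = [], []
    for user in population:
        (treatment if random.random() < 0.5 else control).append(user)
    baseline = [engagement(u) for u in control]
    nudged = [engagement(treat(u)) for u in treatment]
    return statistics.mean(nudged) - statistics.mean(baseline)

population = [{"minutes": random.randint(30, 90)} for _ in range(1000)]
lift = run_experiment(population, nudge)
print(f"observed lift: {lift:.1f} minutes")
```

Now imagine that loop running continuously, on billions of users, with the “nudges” chosen by whoever holds the data.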
Imagine all this data coming to life on some cool, futuristic dashboard – one that probably exists but most of us haven’t seen. (I imagine the Gamemakers’ control board in The Hunger Games or the control room in Westworld.)
Imagine you have the capability to intimately view humanity on a global scale, viewing and interacting, in real time, with any slice of the data you want. Imagine running predictive modeling on data at that scale.
Would you not develop a “God complex?” What I know about human nature and its hunger for power tells me, undoubtedly, yes. We all would. No one should have that much power.
Back to video games, this is already an uncomfortable topic for well-meaning game developers. In Stafford’s piece, he spoke with game designer Sam Barlow, who said: “I was capturing all this data and then analyzing it later, and it honestly felt like [I was] spying on someone…It makes you think twice about what information you’re collecting.”
Stafford also quoted Jay Stanley, a senior policy analyst at the American Civil Liberties Union, as saying, “Sometimes infrastructure of data collection is built up for one purpose…then people start to think of other uses.” Indeed.
So, There’s No Hope? That’s Depressing.
If we continue with the status quo, where these companies have all the power and no accountability or oversight, then yes. I fear that hope is lost. As I mentioned, Big Tech has the most talented and innovative minds and skill sets (coming out of communist university brainwashing) that money can buy. And money’s no object. In 2019:
- Big data and analytics revenues were $49B USD, projected to grow to $274B USD by 2022.
- The video game industry generated $139B USD and is projected to be ~$300B by 2025.
- The cloud computing industry was $265B USD, projected to grow to $927B by 2027.
We are talking about a combined size (for just those three industries) of close to $1.5T USD within a few years! And many would argue those numbers are conservative, especially considering they are pre-Covid projections!
The money is flowing freely into technology companies now and, since Covid has forced work, school, church – really LIFE – online, we are generating more data than ever before.
Fun fact: in 2019, the WHO officially recognized “gaming disorder” as a mental health condition, and issued warnings and guidance. Earlier this year, however, the WHO encouraged people to stay home, plug in and play games.
What Can We Do?
More data means more data innovation and new solutions, including those around security. But until technology users better understand the implications of their behaviors with technology, the world we are building will be informed by the few instead of the many.
As a start, I would propose a Constitutional Amendment: a Data Bill of Rights for the American people. This would need to have teeth, restricting the infringement of our rights by both technology companies and the US Government. It would need to be enforceable with the full weight and measure of the United States Government. We’d need to define true enforcement mechanisms and ramp up our industry watchdogs – and even then, it isn’t enough. It’s only a start.
As I said in the beginning, we may never be able to catch up. But we must try. We must demand change: We the People, in America and around the world, need more of a voice in what is happening with our data. We need more industry oversight, increased transparency, and crystal-clear accountability.
In closing, to answer Dan’s question: “How much power do these people have?”
As my middle child used to say, “All the much.”