Published by Guests, on 15/08/2020
When you’re representing a point of view that hasn’t been widely propagated in a given debate, it’s only natural to encounter skepticism and resistance. When that point of view challenges “conventional wisdom” around creativity, ingenuity, and human progress, the criticism is fast and furious.
So when I posit that the best way for Silicon Valley to address its problems is through a process that industrial sociologists call “professionalization,” it comes as no surprise that I am more often than not met with a mixture of befuddlement and derision.
Professionalization is the step-by-step process by which a trade or vocation — a wild west composed of freelance practitioners — becomes a profession, with universal and, crucially, enforceable standards and norms. An industry, pre-professionalization, has no uniform standards, no formalized body of knowledge, and, therefore, no ability to sanction or eject a malpractitioner from its ranks.
Imagine, if you will, an unqualified frontier "surgeon" in an old Western, setting up shop because there are no other medical facilities for hundreds of miles. He can do as he pleases, regardless of how his actions impact the community, and as a result he can cause a great deal of harm. This continues until, slowly, new institutions are built to ensure the public's safety and control the quality of the industry. Hippocratic oaths are developed. Norms are created, standards are formulated, and universal expectations are set. This process transforms practitioners from reviled rogue operators in an anything-goes setting to bona fide pros — esteemed, expert, accountable professionals. If you don't meet the profession's standards, you can be disqualified. You can be shut out by your peers. The frontier "surgeon" must gain accreditation or be replaced by a doctor from an accredited institution who knows the risks of malpractice.
In a professionalized system, you’re either forced to take accountability and abide by standards, or you’re removed from the community. And the public is healthier and safer for it. Professionalization happens all the time in different industries. It just hasn’t happened in tech yet.
We know something is very wrong in the world of tech without having quite the right language to describe it, and every day seems to provide a different case for what ails Silicon Valley, along with ideas about how to heal it. But there’s one retort I get so universally from people with power in the world of tech (especially after they make it clear they’re sympathetic to the broad argument that tech needs to be fixed) that I want to spend some time exploring it here:
“But what about innovation?” they ask, enunciating the word as though uncertain I’d ever heard or considered it before.
They continue: “Yes, the industry is admittedly a mess, and yes, it’s certainly wreaking havoc on our social, political, and economic systems — and, yes, it’s not even good for the industry itself to operate in a wild west. Yes, yes, yes — I’ll grant you that. But what about innovation? Won’t professionalization kill innovation?”
To tackle the easy part first: no. No, professionalization will not kill innovation. Quite the opposite. In fact, this is an example where conventional wisdom has it wrong: Professionalization will actually spur innovation on.
Professionalizing the field of social technology represented by the big four U.S. tech companies (Google, Amazon, Facebook, and Apple) would be a boon to innovation, because it is our current tragedy-of-the-commons state that is squashing those who want to do truly innovative work.
Think about the fields that have professionalized in the last 200 years. Take medicine and civil engineering, for instance. Does anyone truly believe that more medical innovation would have existed if we never began certifying doctors? Or if we never put rules in place for the building of bridges?
When I ask why professionalization of the tech industry would limit valuable innovation, I get versions of the following broad, almost axiomatic objection: “Innovation is another word for creativity, and it needs free rein to prosper. It is inherently anti-rule, and anyone looking to establish rules is definitionally going to at least somewhat inhibit innovation.”
In other words, creativity only thrives under a free-for-all set of conditions, without constraints, and said constraints are necessarily obstacles to creative thinking. This is repeated to me with utmost confidence, and, most interestingly, without a single example to support it, as if it is so obviously a priori true that to even give a material instance of its truth would be to show one’s lack of understanding. They absolutely know it’s true.
Which is interesting, because anyone who’s ever created for a living will know immediately that it is absolute nonsense.
When a lot of people ask the same question instinctively, and then give the same stock recitation, rather than providing a concrete example of the principle, it tells me that people might have been socially trained to ask that question. They’re not being antagonistic — they just know they’re supposed to be concerned about innovation, and they’re supposed to mention its potential curtailment as a kind of mantra to those who are ridiculous enough to suggest a meaningful change to how things operate.
“Innovation” has become such a catchall term in the press and among the business class that it’s sometimes hard to know what it means.
So let’s start from first principles: What is this innovation thing that people name-check without seemingly being able to clearly define?
There’s a wide array of popular and academic literature attempting to define and distill innovation as a concept. But almost everyone talking about innovation — for the last 300 years at least — is talking about the same thing.
Innovation, at its most basic, is the use of creativity to solve problems — i.e., creative problem-solving.
If you’re solving a problem in a creative fashion, you’re innovating. If your creative solution doesn’t work or makes the problem worse, or if you’re solving a problem in the standard way, then you’re not innovating. It’s really as simple as that.
And here’s what innovation is not: the elimination of friction for its own sake (“friction,” in this case, means anything that provokes more effort from the user of a product or service — i.e., anything that makes a product more difficult to use). Certainly not if the elimination of friction makes the original problem worse, or creates problems more intractable than the one you set out to solve.
The rapid evolution of technology made us fall in love with the idea that friction is the enemy, and that easy, unfettered, and unencumbered systems lead to greatness. In the early 2010s — and still, to a large extent, now — the buzzword du jour in Silicon Valley was "frictionless." "Frictionless," more often than not, means thoughtless. When you can effortlessly engage in a market or activity, you start to believe that "frictionless" activity by companies is, naturally, "better." But is it? Consider how the pursuit of "frictionless" activity led to the creation of "innovative" financial instruments that were key components of the 2008 economic crash: just one of many examples where frictionless transactions made matters worse instead of solving a problem.
Six years before the Cambridge Analytica scandal engulfed Facebook, Mark Zuckerberg was making the case for "frictionless" sharing. Look where that got us.
I want to be clear here: if your solution doesn’t effectively solve a particular problem, it’s not innovative. It might be rewarded handsomely by the market, but it didn’t innovate. It just moved money around while leaving the underlying problem the same or worse.
Which brings us to my biggest beef with those fierce defenders of innovation.
A year before the Cambridge Analytica scandal broke, the media (Fast Company's Most Innovative Companies list among them) was in full agreement: the most innovative company in the United States was obviously Facebook.
Reading those accolades now is sometimes surreal. What creative breakthrough had Facebook managed? What great act of innovation?
The answer? Buying stuff.
Look back at the New York Times' coverage and you'll find this made quite absurdly explicit:
“[The spouse of Snapchat founder Evan Spiegel] recently told the Times of London that she couldn’t stand Facebook’s behavior. ‘Can they not innovate? Do they have to steal all of my partner’s ideas?’ she asked. ‘When you directly copy someone, that’s not innovation.’ To which I say: Meh. There are lots of different kinds of innovation in the tech industry.”
Except that there aren’t. Acquisition doesn’t equal innovation. Buying ideas is not the same as coming up with them. The innovation coming out of big tech recently is actually the innovation of very small tech outfits that the big guys bought. Those innovations arose from a different set of constraints, from friction that the big guys were not encountering.
There’s a reason all our grand folk tales of technological ingenuity don’t take place on lavish, hundred-acre campuses. They take place in garages. They take place on the sly, with spare time grabbed here and there. It’s because that’s where innovation is made: within constraints, almost by definition. Not in everything-goes-so-long-as-you-have-money-to-make-it wild wests like today’s Silicon Valley.
If we take innovation seriously — the actual solving of problems for consumers or the planet or humanity — then we need to look at the context within which creativity best thrives: when it’s organized and directed. In other words, when it’s constrained.
“The enemy of art,” Orson Welles once opined, “is the absence of limitations.” He knew what he was talking about. There’s a reason why, counterintuitively enough, it’s so much easier to write a beautiful haiku than to feel truly confident in free verse. The starkness of a blank page is deeply discouraging to creativity. Even the most basic structure — five syllables, seven syllables, five syllables — allows for innovation that would never occur to someone trying to create without any template.
None of us wants to be labeled a Luddite. (Even though most of us don’t actually know who the Luddites were.) We’re afraid of being or seeming backwards. Ignorant. Not part of the club. And one of the easiest ways to be seen as not part of the club is to suggest that the people in the top echelons of the system — especially in a system that prides itself on supposed meritocracy — got it wrong.
When we make changes to the structure of an industry, those who were reaping the rewards of its original structure, those in power and with influence, suddenly find themselves losing some of that power. And that is terrifying. Who's going to take over? What's the alternative?
It’s a lot easier to call that threat to the order a threat to innovation. But we are not trading innovation for stability. We are trading the current pre-professional, reckless state of the industry for a better source of innovation.
Looking back at that Fast Company Most Innovative Companies list, another name stands out: Theranos. Theranos, which we now know to be one of the biggest business scams in U.S. history, was heralded as a fount of innovation, using language essentially indistinguishable from that used to defend Silicon Valley “innovation” today.
Theranos ensnared innovation-worshipping elites from virtually every corner of high society: former Defense Department officials like Jim Mattis, media magnate Rupert Murdoch, the Walton family, legendary tech venture capitalist Marc Andreessen, and so on.
None of these patrons of innovation seemingly did even the slightest bit of footwork, which would’ve revealed with minimal effort that the “innovation” promised by Theranos had no reality behind it whatsoever. The person who actually did that footwork wasn’t any of the people whose innovation-spotting acumen gets trumpeted by Fast Company or the Times. It was, in fact, a young medical researcher, fresh out of college.
And what was the superpower that allowed her to assess whether any actual innovation was occurring? It was, quite literally, professionalization — having universally applied professional standards and norms, precisely the thing the pro-professionalization camp is trying to bring to tech. To quote from the whistleblower, Erika Cheung, herself:
“I think [for] anyone who goes into the healthcare industry, you take [the] oath and you set certain standards for yourself… Largely that is to protect patients… In my case I was testing patients with technology I didn’t trust myself. I wouldn’t test it on my family members, [and] then being expected to give those to the public that was a line I wasn’t willing to cross.”
The medical field is obviously incredibly far from perfect; the debates raging across the presidential campaign attest to the deep frustrations felt by millions of Americans about their health care. It has problems that professionalization alone hasn't solved.
It's also, to state the obvious, considerably better than it was pre-professionalization, when your moonshine-swilling doctor required no certification and could not be commended or punished by their peers, and when no one could risk investing in medical cures or innovations because doing so would be so much more expensive than just selling snake oil and heading out of town.
I don’t think those falling back on a vague, undefined vision of innovation as an objection to professionalizing tech would ever support keeping the medical field in the 19th century. So why do they want to keep tech there?
Professionalization isn’t a utopian vision — it’s happened before in fields ranging from medicine to law to civil engineering. It’s a first step. An act of triage, if you will, on a tech industry that will continue bleeding out, with debilitating scandal after scandal, if we don’t move to professionalization.
It takes no great moral courage or genius to realize that seeing a doctor in 2019 is preferable to seeing a doctor in 1875 — and no one would take seriously an argument suggesting the fact we’ve moved on since the 1870s is a reason we should be satisfied with our current medical system. There will still be immense problems to face within tech, as there are still within health care, after we professionalize.
But we can't even begin to face those problems before we professionalize. You need to begin certifying doctors and establishing medical boards long before you reach the point where you can debate Medicare For All. But we got there. If Silicon Valley had undergone the same professionalization process as medicine, law, and civil engineering, maybe its investors wouldn't have squandered billions on Theranos before medical researchers blew the whistle.
Those people I mentioned who trumpeted Theranos for years because of its "innovation" — those in the VC world, key voices in tech journalism, non-tech billionaires dipping their toes in — are precisely the people from whom the "innovation" talking point most often originates. Let's hold off on handing credibility they haven't remotely earned to anyone who rotely repeats their talking points about "threats to innovation."
Let’s attack the real threat to real problem-solving: a pre-professionalization free-for-all status quo that makes it nearly impossible for actual creativity to thrive. Let’s stop keeping things the same in the name of innovation.
(This article was originally published on marker.medium.com)