Kondratiev waves, commonly referred to as innovation waves, have recurred throughout history, driven by technologies such as electricity, the printing press, and the steam engine. Each of these technologies spurred a paradigm shift that transformed the way the world operated. Today, many believe AI is the next Kondratiev wave, and that it will transform how businesses create value, how people work, and ultimately how people live.
For businesses to survive the era of AI, they must be prepared to abandon legacy technology and invest in new ways of doing things, sometimes very quickly, in order to stay relevant. This phenomenon is called the “burning platform” effect, based on the idea that to stay competitive, businesses must adopt a radical change strategy as if their current way of doing things were on fire.
While it is essential to adopt new technologies with a sense of urgency, a popular argument against the burning platform effect is that using fear to drive change is counterproductive. A Forbes article states that fear leads people to be more conservative, taking fewer risks and preserving the status quo. To lead a culture change successfully, leaders must ensure that their teams feel safe enough to be actively involved in that transformation.
True acceptance of organizational transformation comes from employees’ willingness to innovate, and innovation relies on people being able to try things and fail without judgment or punishment. Ensuring a shared belief that everyone on the team is safe to take risks, including implementing ideas that have a high chance of failing, results in more long-term success. This approach, known as the “Prototyping Effect,” is a popular method in engineering that companies like Google implement instead of the burning platform ideology.
The Prototyping Effect is explained by Adrian Otto, Technical Director at Google: “In our experience, the best way to achieve more successes and build more confidence in your culture is to increase the number of ideas you try. At Google, we do this by promoting small incremental changes and aiming to improve with each iteration. Our learnings primarily come from the ‘ideas tried’ category, so the more active we are there, the more quickly we learn and refine our actions to drive successes.”
To understand the importance of the AI Kondratiev wave and how to succeed with a fail-fast, fail-often approach, here are the top 10 takeaways compiled from the “Harvard Business Review on Artificial Intelligence” book.
1) The most important general-purpose technology in our era is artificial intelligence.
The term artificial intelligence was coined in 1955 by John McCarthy, a math professor at Dartmouth College. Although AI has made significant advancements over the past 66 years, the most substantial improvements have come not over the past decade but within just the last five years, since 2016.
Executives see it as a disrupter, employees view it as a job destroyer, and consultants pitch it as a “cure-all.” However it is framed, AI is the most crucial technological advancement of our era. According to McKinsey, it will contribute as much as $13 trillion to global GDP by 2030, making it the most prominent commercial opportunity in today’s fast-changing economy.
2) The different flavors of AI.
Artificial Intelligence (AI): a broad field or term that describes any machine capable of acting autonomously.
Machine Learning (ML): a branch of AI in which computers learn from data without being given complex rules. ML is a way to use standard algorithms or models to analyze data to drive predictive insights and repeated decisions at scale. It teaches a computer how to solve problems by feeding it examples of the correct answers.
Deep Learning (DL): a type of machine learning that imitates the way humans gain certain types of knowledge. While traditional machine learning algorithms are linear, deep learning algorithms are stacked in a hierarchy of increasing complexity and abstraction.
Reinforcement Learning (RL): an advanced ML technique that enables models to learn complex behaviors without labeled training data and then make short-term decisions while optimizing for longer-term goals. A favorite example of RL is the AWS DeepRacer, a fully autonomous miniature race car driven by reinforcement learning that even has a global racing league!
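To make that last definition concrete, here is a minimal reinforcement learning sketch in Python. Everything in it is an illustrative assumption (the toy one-dimensional “track,” the reward values, the hyperparameters); it is not how AWS DeepRacer actually works, but it shows an agent learning a behavior from a reward signal rather than from labeled answers.

```python
# A toy Q-learning agent on a 1-D "track" (all values here are invented
# for illustration; real RL systems like DeepRacer are far more complex).
import random

N_STATES = 6          # positions 0..5 on the track; position 5 is the finish
ACTIONS = [-1, +1]    # step backward or forward
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise pick the action with the best Q-value.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # No labeled "correct answers": just a reward signal (+10 at the
        # finish, -1 per step) that the agent uses to optimize long-term return.
        reward = 10.0 if next_state == N_STATES - 1 else -1.0
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next
                                             - q_table[(state, action)])
        state = next_state

# After training, the learned policy should step forward from every position.
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```

The agent starts out wandering, racks up penalties, and gradually learns that stepping forward is the only policy that pays off over the long run, which is the essence of optimizing short-term decisions against a longer-term goal.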
3) AI, in its most basic form, is a glorified Excel formula.
AI can be daunting even to technical people, let alone those who are not. In an interview, Joaquin Candela, the AI Lead at Facebook, breaks AI down into an explanation understandable to anyone familiar with Excel. He states that a machine learning algorithm is essentially a lookup table of key inputs and their values. For example, an algorithm is given images of horses and images that are not horses. These images are labeled as such, and the algorithm keeps the answers in a table. When a new image comes along, the algorithm looks at previous examples to determine whether the new image is, in fact, a horse.
There is much more to it than that, but Joaquin’s definition makes it sound realistic, almost tangible, and therefore attainable.
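To see the metaphor in code, here is a toy sketch of that lookup-table idea. The two-number “images” and their labels are invented for illustration; real systems use thousands of learned features, but the look-up-the-closest-example logic is the same.

```python
# A toy sketch of Joaquin's lookup-table metaphor. The "images" are made-up
# pairs of numbers (imagine simple measurements extracted from a photo).
labeled_examples = [
    ((0.9, 0.8), "horse"),
    ((0.8, 0.9), "horse"),
    ((0.2, 0.1), "not horse"),
    ((0.1, 0.3), "not horse"),
]

def classify(new_image):
    """Look up the most similar previous example and reuse its label."""
    def distance(example):
        (x, y), _ = example
        return (x - new_image[0]) ** 2 + (y - new_image[1]) ** 2
    _, label = min(labeled_examples, key=distance)
    return label

print(classify((0.85, 0.75)))  # -> "horse"
print(classify((0.15, 0.20)))  # -> "not horse"
```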
4) Non-technical employees should know how AI works.
A strong AI strategy is about embedding AI into the very DNA of a company’s culture, and a positive culture is about employees feeling safe enough to innovate. A strong AI strategy also considers the benefits of having non-technical people understand AI concepts and their use cases. Machine learning is already in everything they touch: in PowerPoint with its design recommendations, in expense programs like Concur categorizing receipts, and in Gmail suggesting how to complete their sentences.
A broader understanding of AI and the role it currently plays in people’s lives can lead to many different business units, not just engineers, developing ideas for how AI could solve fundamental business problems, whether by increasing productivity with automation or improving customer satisfaction with prediction models. According to Emma Martinho-Truswell, there are three questions that non-technical employees should be able to answer about AI: how does it work, what is it good at, and what should it never do?
Just because ML can solve a problem does not mean it should. We can train AI to detect human emotions, but should we equip managers with smart glasses that read the emotions of their employees during every meeting? Employees should understand the ethical limitations of AI. An excellent way to discover those limitations is to judge how comfortable people would be with AI intervening in specific scenarios. For example, AI can determine how productive employees are based on their activity on a computer. Should AI be used to reprimand unproductive employees, or is that an unethical use of AI?
5) The goal of AI should be saving human time and energy, not taking over jobs.
The common fear is that AI will replace human jobs, leading to mass unemployment and economic depression. That paints a very dystopian and frightening future, but extensive evidence suggests a more hopeful, even utopian, one. AI at its best saves human time and energy, freeing people to focus their efforts on passion projects or on making the world better for all beings.
The most realistic future is AI working alongside humans, automating tedious tasks and making people even more efficient in their work than they already are. Executives who want to position their company to thrive in an AI future should set the following goals:
Use technology to augment human skills and reinvent operating models
Take the opportunity to redefine jobs and rethink organizational design
Make employees partners in building the intelligent enterprise
6) Companies that wait to adopt AI may never catch up.
Companies that plan to take the “fast follower” approach to AI may never catch up. Early adopters will take considerable market share, operating at a lower cost and with better performance.
The Softchoice “2021 State of Cloud-Enabled AI Report,” which surveyed a mixture of C-level executives and individual contributors, found that over 60 percent of business leaders know that advanced data analytics and ML will have a transformative impact on their business as early as 2022. Nevertheless, over 40 percent of individual contributors said they lack both buy-in from their managers and ML skill sets on their teams.
Exploring managed or professional service partners is an option for businesses that currently lack in-house expertise while building an AI strategy.
7) How AI will change strategy.
The authors of Prediction Machines: The Simple Economics of Artificial Intelligence state that the question executives ask them most often is how AI will change strategy. While we cannot say for sure what AI will ultimately be capable of, we can base our strategies on its current state, especially around prediction models. AI predictions will become more intelligent as more companies train models on existing data while continuously collecting new data.
Business leaders should design their strategies with those current capabilities in mind and then take calculated risks on what near-future states of AI are possible.
Amazon shopping is an example of a strategy built around both current and future states of AI. We already know that Amazon predicts what products shoppers might be interested in based on their search history and past purchases, and it can suggest repurchasing products as well. Last weekend, my Alexa verbally suggested that it could be time to reorder the loose-leaf tea I purchase every three months.
The article also poses an interesting question: what if Amazon changed its business strategy from predictive shopping to predictive shipping? For example, what if Amazon simply shipped the tea that I buy every three months, perhaps along with empty tea bags to put the loose-leaf tea in? Chances are, I would keep both items and accept the charges. Amazon is likely already working on this ship-first, shop-after predictive model but is probably still researching whether consumers would respond favorably or negatively to such a feature.
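As a rough illustration of how simple the first version of such a prediction could be, the sketch below estimates a reorder date from the average gap between past orders. It is hypothetical (the dates and the quarterly tea habit are invented), not Amazon’s actual model.

```python
# A hypothetical repurchase predictor: average the gaps between past orders
# and flag when the next one is due. All dates are invented for illustration.
from datetime import date, timedelta

past_orders = [date(2021, 1, 5), date(2021, 4, 2), date(2021, 7, 1)]

gaps = [(b - a).days for a, b in zip(past_orders, past_orders[1:])]
avg_gap = sum(gaps) / len(gaps)                       # roughly quarterly
predicted_next = past_orders[-1] + timedelta(days=round(avg_gap))

print(f"Average reorder interval: {avg_gap:.0f} days")
print(f"Suggest reorder (or pre-ship) around: {predicted_next}")
```

A real system would layer on confidence estimates, seasonality, and an opt-out before shipping anything unasked, but the core signal is just this kind of interval pattern.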
8) Quality data will be more relevant than quantity.
As AI develops, it will rely less on big-data-trained AI, called “bottom-up,” and more on real-time-data AI, called “top-down.” Bottom-up AI means training models to make predictions from large quantities of data. Top-down AI requires less of an up-front data dump but more observation and reinforcement learning. For example, the AWS DeepRacers mentioned earlier require real-time, ongoing environment data to get faster around the track.
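One way to picture that contrast in code is with scikit-learn, shown below. The data is synthetic, and “top-down” is approximated here as incremental streaming updates via partial_fit rather than a single batch fit over a big historical dataset; real top-down systems are far more sophisticated.

```python
# Synthetic sketch of bottom-up (one big batch fit) vs. a top-down-style
# model that learns incrementally from small batches of fresh observations.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Bottom-up: a large historical dataset, fit in a single pass.
X_big = rng.normal(size=(10_000, 2))
y_big = (X_big[:, 0] + X_big[:, 1] > 0).astype(int)
batch_model = SGDClassifier().fit(X_big, y_big)

# Top-down-ish: no data dump up front; the model updates as small batches
# of observations arrive, the way a DeepRacer refines its driving lap by lap.
stream_model = SGDClassifier()
for _ in range(100):
    X_now = rng.normal(size=(32, 2))                  # fresh observations
    y_now = (X_now[:, 0] + X_now[:, 1] > 0).astype(int)
    stream_model.partial_fit(X_now, y_now, classes=[0, 1])

X_test = rng.normal(size=(1_000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("batch accuracy: ", batch_model.score(X_test, y_test))
print("stream accuracy:", stream_model.score(X_test, y_test))
```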
Although the theory of top-down AI was first proposed in the 1950s, bottom-up AI was more attainable and therefore more popular. Because bottom-up AI has developed so significantly, companies can now start experimenting with top-down AI, which is exceptionally beneficial for startups that do not have petabytes’ worth of data sitting in data lakes the way traditional enterprises do.
We will eventually master predictive models and teach machines common sense by mimicking human cognition, but we are not quite there yet. This idea leads to our next point: the race to build emotionally intelligent AI.
9) The shift from data-driven intelligence quotient (IQ) AI to deep emotional intelligence (EQ) guided AI.
Annette Zimmermann, VP of research at Gartner, wrote, “By 2022, your personal device will know more about your emotional state than your own family.” Just two months later, a landmark study from Ohio State University claimed that its algorithm was now better at detecting emotions than people are.
We can train an AI model to interpret emotions by listening to the tone of someone’s voice or reading the expressions on their face. We can then train AI how to respond if it detects anger or sadness, but we cannot train it to respond on its own the way humans naturally do, mainly because we cannot explain how we do it ourselves. This is called Polanyi’s Paradox: the theory that human knowledge of how the world functions is, to a large extent, beyond our explicit understanding.
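Polanyi’s Paradox is also why emotion models are trained from labeled examples rather than hand-written rules. In the toy sketch below (with an invented six-sentence dataset; real systems would use audio and facial features, not text snippets), no rule for anger is ever written down, yet the model infers one from examples.

```python
# Learning emotion labels from examples instead of explicit rules.
# The six utterances and their labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I am so thrilled about this", "what a wonderful surprise",
    "this is fantastic news",
    "I am fed up with all of this", "this is completely unacceptable",
    "stop wasting my time",
]
labels = ["joy", "joy", "joy", "anger", "anger", "anger"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(utterances, labels)

print(model.predict(["what fantastic wonderful news"]))  # likely "joy"
print(model.predict(["I am fed up, stop this"]))          # likely "anger"
```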
Perhaps the biggest challenge we face in designing emotionally intelligent AI is finding more emotionally intelligent humans to help build it. It is a common perception that women tend to score slightly higher on tests of emotional intelligence. Regardless of whether that is true, a Kaggle report, “The State of Data Science and Machine Learning 2020,” found that data science still suffers from a significant gender gap in the workplace: only 16 percent of data scientists identify as women.
Building more diverse machine learning teams and including non-technical people could solve the current constraints in developing emotionally intelligent AI.
10) Start small but think smart.
A first-time AI project should give a team a quick win that encourages them to take on a more complex project in the future. A quick success will also help secure immediate buy-in from the executive leadership team (ELT) and the rest of the business, along with the required funding if needed.
Before starting an AI project, it is essential to consider the experiences of those who have learned some hard yet crucial lessons. AI can be powerful, but it is also young and naïve, and it fails more often than it succeeds.
Be prepared for failure, because an AI project can go wrong very quickly. Take, for example, when Twitter users trained Microsoft’s Tay chatbot to become extremely inappropriate in less than 24 hours.
Remember, it is OKAY to fail; failure can create new opportunities. Google Glass, for example, was originally intended to be the next evolution of the smartphone. While it never took off in the mass market, it became a unique enterprise solution, helping manufacturing employees work smarter, faster, and safer.
In conclusion, the Harvard Business Review on Artificial Intelligence was a fascinating read. When building an AI strategy, the essential takeaways are ensuring that machine learning teams are diverse and that non-technical employees can be involved in the project. More importantly, always respect the privacy of users’ data when training models and consider the ethical limitations of each project.
Ready to tackle an AI project? Andrew Ng, founder of Google Brain, created an “AI Transformation Playbook” to get you started.