Approximately 65 percent of the world’s population (around 5.6 billion people) is connected to the internet. In addition, some 30 billion IoT devices are constantly online; in 2021 they generated 79 zettabytes of data. Predictions suggest that more data will be produced in the next three years than in the past 30.
This astonishingly rapid growth is driven primarily by digital innovation from a handful of Big Tech companies that view the world as their market. Using the data they collect, they deploy the most advanced influencing technologies to achieve their goals, turning their customers into products in exchange for “free” services.
While the rat race of temptation and manipulation was once reserved for totalitarian regimes, religious movements, and, to some extent, certain democracies, today it is the Big Tech companies that strive for global dominance, now with their latest “toy”: artificial intelligence (AI). How do we protect society against the side effects of these tech giants?
It is already clear that AI will take over many office jobs, but the social and societal consequences of a lack of meaningful occupation for highly educated people remain unclear. What will be the impact on public services if governments can no longer collect substantial income tax? How can we democratize data usage? And how can we finally leverage these vast amounts of data to make the world a better place?
Data can make the world more beautiful, or uglier. The choice seems simple, but apparently it is not.
AI will bring significant scientific breakthroughs and offer solutions that surpass the human brain. But the question is whether AI is being handled with care when the primary driving force behind its development is economic. With the launch of ChatGPT and Bard, we are watching Microsoft and Google race to bring products to market as quickly as possible. As with social media, we will only learn the consequences later. “The right to digitally disappear” only became a right when the development of the internet made it necessary.
None of the Big Tech organizations fully understand how AI learns, what it learns, and when it learns it. Yuval Harari suggests that AI is to the virtual and symbolic world what nuclear bombs are to the physical world. There is a danger of “disreality,” in which people begin to doubt everything digital. AI calls for a more cautious approach, perhaps even a pause.
The world will undergo significant changes in the next ten years, and the impact of AI on society will be enormous. A handful of Big Tech companies compete, and numerous startups work on and with the digital technology of the future. How will digitization, data, and the arrival of AI influence your life? Will you resist, or embrace new digital technology and prepare for a different future? How does your organization relate to emerging technologies? These are just some of the questions that still need answering.