This digital transformation is a societal meta-development that directly impacts the subconscious mind of organizations, not only with respect to structural and process-oriented questions but also methodological and technological questions of decision support and automation, and the incorporation of hybrid intelligences. This future follows a seemingly inevitable but influenceable development in the cultural evolution of mankind, which can be viewed with optimism or with skeptical caution.
These perspectives, together with the necessary rules, regulation, legislation, and ethical considerations, may serve as a guide and as reference points in the socio-cultural and social paradigm shift we are in. Together with the guiding principles for shaping the subconscious mind of organizations from the previous chapter, the section titles of this chapter about emerging and actively shaped “hybrid intelligences” may serve as additional guiding principles for building sustainable organizations.
We are now living in a new era of data management, a fundamental transformation in the industry similar to when virtualization was first adopted 15 years ago. In our hybrid/multi-cloud world, there is no single answer for managing data. Given the many types of data, the locations where it resides, and the diverse data languages and methods of control, the word “data” can encompass a great deal. Emerging technologies including IoT, 5G, AI, and ML are generating greater volumes and more varied types of data. How we access that data and derive insight from it becomes critical, but we have been limited by people, processes, and technology.
No question today is more existential to the future of AI than whether AI algorithms should merely mirror the human biases of the world as it is, or whether they should ultimately rise above them, and, if so, how precisely that more “ideal” world might be defined. Should decision-making algorithms encode the world as it stands or the world as it could be? Who defines what that idealized world should look like, and what values should it possess and uphold? Should management algorithms display the human empathy of their predecessors, who put employees first, or should they adhere to the mathematical optimization of maximizing corporate profits at all costs?
Societies have long grappled with the question of what values, beliefs, and priorities should define their collective existence, with many forms of government over human history designed in different ways to help create and enforce such rules. The problem with AI is that it is forcing societies to encode these values into pieces of software written by private companies and accountable to no one.
In the corporate world, a common site of algorithmic adjustment has been management and workflow automation. A human manager might show empathy toward their employees, adjusting work schedules for those with young children, those feeling under the weather, or those going through a difficult time in their life. Such actions are technically harmful to a company’s bottom line in an era of replaceable human capital, and they create opportunities for bias when one new parent receives a break while another is sanctioned for not meeting their deadlines.
As algorithms and data-driven decision-making have come to the enterprise, companies are increasingly grappling with the fact that their data and algorithms often recommend choices that may be supported by the data but have harmful side effects. Algorithms can often see only part of the problem and thus reason from an incomplete understanding of complex, interdependent situations. Even when limited to narrowly defined domains, the available training data may be highly biased, requiring considerable investment to mitigate that bias. Most troubling, in many cases biased data merely reflects a biased world, raising the question of who defines the idealized world that algorithms should embody.
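The bias described above can be made concrete. As a minimal sketch with entirely hypothetical toy data (the function names and the hiring scenario are illustrative assumptions, not from the original text), one simple fairness check is the demographic parity difference: the gap between the positive-decision rates of two groups. A large gap is one signal that a dataset or an algorithm's decisions may encode a biased world.

```python
# Illustrative sketch with hypothetical toy data: computing the
# demographic parity difference, a simple fairness metric, for a
# set of binary decisions (1 = positive outcome, 0 = negative).

def selection_rate(decisions, groups, target_group):
    """Fraction of positive decisions among members of one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(decisions, groups):
    """Absolute gap in selection rates between the two groups present."""
    labels = sorted(set(groups))
    rates = [selection_rate(decisions, groups, g) for g in labels]
    return abs(rates[0] - rates[1])

# Hypothetical hiring decisions with a group label per candidate.
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A: 3 of 4 selected (0.75); group B: 1 of 4 selected (0.25).
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A gap of 0.5 on its own does not prove unfair treatment, but it flags exactly the situation the text describes: data that may faithfully reflect a biased world, leaving open the question of what the "correct" rate ought to be.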