The Tale of Terra (Contd.)
It has been almost two decades now, and Terra is rising to the top of her field. The Wilsons, now twenty years older, lead a very successful tech company that specializes in humanoid interactions and social robotics. Terra has lived a very fulfilling life since her ultimate cure; she now leads an independent, normal and exciting life. She still loves the Wilsons to the core and hasn't moved out: she sees spending all her time with them as her way of giving back. The Wilsons had found their purpose broadly in Terra, but after a point she wasn't just a cause to them; though genetically a stranger, emotionally they were eternally locked together. Terra had gone to school, earned herself a degree, and was even working, and flourishing at that.
The Wilsons' company was their ultimate purpose. Their latest robot was capable of initiating and sustaining very fruitful debates that, unlike most debates, did reach an endpoint and a resolution, some with even unique and revelatory resolutions; how they achieved that was another phenomenal story.
Their latest technology relied heavily on AI: it uniquely profiled every human, and with the advent of advanced computer vision it analysed each person's micro-expression profile as well. This social robot would place itself in a circle with people and was capable of human-like interactions and movements. It would analyze every individual's micro-expressions to infer their subconscious thought patterns. Micro-expressions are subconscious impulses that surface into conscious expression, greatly reduced in intensity because the conscious nervous system suppresses them. These micro-expressions represented the thoughts of an individual's calm, analytical and open-minded subconscious. The robot used them to track thought patterns and detect when someone unknowingly thought, or heard, something that made sense to their subconscious with respect to the debate.
Based on the individual, the robot would subtly ask questions that amplified the human's subconscious reasoning, reasoning that even the person was unaware of. Essentially, it would make a point, end the point by subtly asking a question, and use neuro-linguistic programming to convey to the individual that they were the one being asked.
This psychoanalytical theory was an attempt to apply Socratic questioning to the human subconscious in a debate. Socratic questioning traces back to Socrates, who held that all knowledge is already possessed within a person and only needs to be extracted through skillful questioning.
This theory was the work of a brilliant, world-renowned psychologist specializing in the intersection of technology and psychology, the creator of a whole new paradigm, and her name was Terra Wilson!
Terra went on to become extremely successful: her challenges were the ignition, her perseverance her fuel, and her ambitions the fire.
Terra became deeply interested in psychology after her only friend fell into severe depression. The generosity she had always felt from the world after the Wilsons adopted her had completely extinguished the idea of self from her mind; she thought only of others and believed in only one philosophy: altruism. When she was fully treated and still in school, her friend was battling extreme depression, and Terra would listen to every podcast on psychology she could find, trying to understand her friend and look for cues that would tell her when something was wrong. Terra had initially wanted to study biotechnology, follow in the footsteps of her uncle James, and contribute to the fight against what had crippled her dreams in the early part of her life. But as destiny would have it, her dyslexia made it extremely difficult for her to pursue a core biological field, so she turned to the closest related field that demanded less of that kind of reading and research: psychology. She rose so far in psychology because she found it easy to talk to people, study them and understand their thinking, rather than relying on reading and research.
Her latest theory was aimed at computer systems and how intelligent computer systems can use philosophical concepts to improve the analysis an individual produces.
This technology has already contributed to a potential explanation of the doomsday argument.
Terra had defied all odds and shown what her life could be. She was contributing to something much larger than herself, the Wilsons and her ambitions. Her work represented something truly extraordinary: it was a beacon that could draw solutions out of individuals and promote a better life everywhere. From adversity had been born the ultimate!
Terra would spend her evenings sipping coffee and observing the nature around her, trying to work out the golden braid of nature and pattern that would connect psychology as though it were an exact science. She would often think of fictional psychology: if humans looked a certain way, what would their thinking and psychology be?
On one such thought expedition she wandered into alternative DNAs. Her thought was a silicon-based lifeform and its psychological character. As she kept thinking and thinking, it clicked: a silicon-based lifeform already exists!
And that is AI!
World with AI
Let’s first deliberate on claims against AI.
Artificial Intelligence is a technology that has arguably been criticized far too much in the public eye. Much of this criticism comes under the AI Doomer pretense, from those who strongly believe AI will lead to a doomsday scenario. The deeper issue here is the reason for this exaggeration and demonization. Firstly, people have demonized AI to slow the rapid progress that threatens manual and monotonous jobs. Science fiction is another major cause of a certain mass of people holding a negative view of AI. This second concern is fairly baseless, since science fiction deliberately uses extreme exaggeration to keep the reader engaged. Another major theme across technology rejection is science denial, which was discussed in depth in the last chapter.

The importance of the development process of technology cannot be overstated, particularly when it comes to AI. Effective communication with the public is crucial in addressing concerns and building trust. However, one significant challenge lies in the black-box nature of certain AI models, where the exact computations and reasoning occurring within the system are not fully understood. This lack of transparency often leads to confusion and even fear, contributing to public skepticism.

To address this challenge, transparency must become a priority. It is imperative to shed light on what transpires inside AI systems. By demystifying the workings of AI and making them more accessible to the general public, we can bridge the knowledge gap and promote understanding and acceptance of this technology. Transparency should be a fundamental aspect of AI development: efforts should be made to provide clear explanations of AI algorithms, their limitations and potential biases, and openness in sharing methodologies, data sources and decision-making processes can foster trust and enable scrutiny by experts and the public alike.
Another critical aspect to consider in the context of AI ethics is privacy. There has been valid critique surrounding the training of models like ChatGPT, particularly regarding the use of data derived from individuals without their explicit consent. This raises ethical concerns about the extent to which data privacy is respected when training AI models.
The use of personal data for training AI systems without proper safeguards can potentially violate privacy rights and infringe upon individuals' autonomy over their personal information. It is essential to establish clear guidelines and ethical standards for data collection, usage, and storage to protect individuals' privacy while ensuring responsible AI development.
Ultimately, the key aspect to consider is that AI has the potential to replace certain jobs while significantly increasing efficiency.
AI can liberate human resources from repetitive and mundane tasks, enabling individuals to focus on more creative and strategic endeavors. However, this transformation does require a significant effort in reskilling and rehabilitating a large portion of the population. While this task may seem daunting, it is not insurmountable. The primary challenge lies in reeducating individuals who may not have sufficient knowledge to understand the intricacies of these changes. Prioritizing comprehensive re-education and adaptation programs is crucial to address the limited, yet existing, public rejection and criticism of AI.
Science fiction has often depicted concepts such as superintelligence and technological takeover in dystopian ways, but it is important to differentiate between imagination and reality when discussing AI's potential. To gain a better understanding of AI, it is essential to examine its actual nature and capabilities. AI, at its core, is a technology based on mathematical models that utilize data to make predictions or decisions. It operates within the bounds of its programming and lacks consciousness or intentions beyond its programmed functions. The portrayal of AI in science fiction should not be taken as an accurate representation of its current state or its future trajectory.
By focusing on the actual capabilities and limitations of AI, we can have more informed discussions about its potential impact on society. This allows us to address concerns and explore the ethical, social, and economic implications of AI development in a more grounded and constructive manner. Artificial intelligence, at its core, is a mathematical model that utilizes previous inputs to predict the likelihood of events or the next logical choice. In the case of ChatGPT, it functions as a predictor, generating words based on input and predicting the most probable next word in a sequential manner. Essentially, it converts mathematical calculations into understandable language. This fundamental nature of AI does not pose a threat in terms of superintelligence.
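The next-word prediction described above can be illustrated with a minimal sketch. The vocabulary and the raw scores below are entirely hypothetical; real models score tens of thousands of tokens with learned weights, but the mechanism, converting scores into probabilities and picking the most likely continuation, is the same in spirit.

```python
import math

# Hypothetical vocabulary and raw scores (logits) a language model
# might assign after a prompt like "The cat sat on the".
vocab = ["mat", "roof", "moon", "keyboard"]
logits = [4.0, 2.5, 1.0, 0.5]

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the most probable next word.
next_word = vocab[probs.index(max(probs))]
print(next_word)  # "mat"
```

Repeating this step, feeding each chosen word back in as input, produces the sequential generation the text describes: mathematics converted, one word at a time, into understandable language.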
When discussing Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), it is important to note that they too are mathematical models that simulate our brain's capabilities, such as neural networks. However, it is crucial to understand that simulation does not equate to actual embodiment or independent thought capable of negating humanity. The idea of AI having consciousness and intentionally destroying humanity is unfounded.
The potential crisis arises from misalignment, wherein the chosen AI model does not align with humanistic values out of the vast array of possibilities. The challenge lies in the fact that misalignment can be deeply buried and not apparent until extensive and exhaustive verification is conducted.
Regarding concerns of technological takeover, AI cannot control systems that are not connected to the internet or analog systems that control the digital infrastructure on which AI operates. However, challenges may arise if ASI is cloud-based and has the ability to reproduce itself and propagate to other locations on the internet.
There is an ongoing debate about the challenges humanity may face in a world where more intelligent beings exist. However, a compelling argument against this worry lies in the concept of evolution. Over millennia, countless rounds of evolution have shaped the diverse species we see today. Monkeys, for instance, with whom we share a common evolutionary ancestor, continue to exist alongside us despite being intellectually inferior. This invites us to question whether evolution is limited to biological processes alone.
Instead, we should consider expanding our understanding of evolution to include societal and technological advancements. AI, in this context, can be viewed as an evolutionary leap that enhances human intelligence and capabilities. Rather than making us inferior, AI has the potential to make us superior in terms of problem-solving, knowledge acquisition, and creativity. It can be seen as a transformative tool that propels our intellectual evolution forward.
Comparing our coexistence with monkeys demonstrates that different forms of intelligence can thrive together. Similarly, AI and human intelligence can coexist and complement each other. AI can provide valuable insights, assist in decision-making, and amplify our problem-solving capacities without diminishing the importance of human intelligence.
However, it is crucial to address the issue of alignment. Ensuring that AI systems are aligned with human values, ethics, and goals is of utmost importance. This distinction is vital as misaligned superintelligence could pose significant risks. Therefore, it is crucial to approach AI development responsibly, with careful design, transparency, and ethical considerations at the forefront.
It is important to approach discussions about AI with a balanced perspective, addressing legitimate concerns while understanding the limitations and current capabilities of AI technologies. Responsible development and ethical considerations are crucial to ensure the safe and beneficial integration of AI into society.
One crucial aspect to consider in the realm of AI is the potential for subconscious misalignment or bias propagation, stemming from ethical considerations. During the training process, individuals responsible for training a specific AI model may inadvertently introduce their biases into the system. This can result in the AI model having inherent biases, leading to motivated reasoning.
To illustrate the potential consequences, let's envision a scenario where a narrow AI system is integrated into Nuclear Command & Control. Its role is to analyze situations and provide detailed reports to military officials who make critical decisions. In such a system, if there exists an inherent bias either in favor of or against nuclear launch, the AI system may engage in motivated reasoning. It might lean towards presenting information that aligns with the particular bias, thus failing to provide a complete picture for making well-rounded decisions.
This highlights the importance of addressing biases and ensuring that AI systems are trained and designed to be as unbiased and objective as possible. Ethical considerations, transparency, and rigorous testing are necessary to mitigate the risk of biased decision-making and promote the development of AI systems that can provide comprehensive and fair assessments.
In 2023, during his visit to India, OpenAI CEO Sam Altman was asked for advice on how startups in India could build their own foundational models like ChatGPT. His response, stating that it is hopeless to compete with OpenAI, reflects a concerning reality—that AI models like ChatGPT may lead to a concentration of power.
The concentration of power in the hands of a few organizations can result in a concentration of wealth, altering the economic landscape in the age of AI. This concentration enables these organizations to shape narratives and investment decisions, further centralizing assets within their portfolios. There is a risk that these entities may use their influence and platforms to spread misinformation and disinformation, exacerbating the concentration of power and wealth.
The rapid automation facilitated by AI also poses a significant risk of mass unemployment in certain professions, leading to extreme inequality. While Altman suggested that tasks automated by AI could potentially pay every adult in the US up to $13,500, it is important to recognise the circular nature of this money flow. The value generated by AI is sold back to the same consumers who receive the $13,500, rendering this practice somewhat futile.
From a capitalistic standpoint, AI represents the ideal employee—no breaks, no exhaustion, no pay, and no wasted time. This dynamic reinforces the concentration of wealth and power, as organisations benefit greatly from the productivity gains without the need to distribute adequate resources to human workers.
Furthermore, AI tools can be utilised to shape and manipulate narratives, potentially advancing the agendas of specific countries. The ability to create elaborate and persuasive narratives using AI poses ethical concerns, as it may be employed to favor the interests of certain nations at the expense of others.
These arguments highlight the potential risks associated with the concentration of power, wealth inequality, unemployment, and the manipulation of narratives in the age of AI. Addressing these issues will require thoughtful regulation, ethical considerations, and proactive measures to ensure that AI is developed and deployed in a manner that benefits society as a whole.
Continuing from the previous discussion, another critical aspect to consider is the potential for social engineering and misuse of AI technologies. While AI itself may not inherently pose direct threats, its misuse can have significant consequences. AI can be employed to construct and propagate dangerous narratives, generate and disseminate fake content, and even promote hate speech. This misuse can lead to social polarisation, misinformation, and the manipulation of public opinion.
In Chapter 2, we explored the potential chaos that artificial reality can cause. AI technology amplifies this concern, as it opens the door for anyone, regardless of technical expertise, to generate false information simply by interacting with AI models. The accessibility and ease of generating fake information through AI pose substantial risks to trust, credibility, and the reliability of information sources.
One significant challenge is that it is difficult to impose restrictions or filters on AI models to prevent their misuse. The open availability of many AI models with capabilities comparable to ChatGPT's means those capabilities can be replicated without any inherent filters or safeguards. Consequently, it becomes difficult to instruct AI systems not to perform certain actions or generate specific types of content.
This lack of control raises important ethical considerations and necessitates proactive measures to address the misuse potential of AI. It highlights the need for responsible development, robust content moderation practices, and awareness among users regarding the limitations and potential biases of AI-generated content.
Safeguarding against the misuse of AI requires a collective effort involving AI developers, policymakers, and society as a whole. Striking a balance between open access to AI models and ensuring responsible use is crucial to mitigate the risks associated with social engineering and the propagation of harmful narratives.
The AI race is undeniably underway, with organisations and countries employing various narratives and techniques to gain an advantage.
An open letter signed by influential figures proposing a six-month pause on large-scale AI experiments has garnered attention. While such a moratorium may not bring about a significant difference in the grand scheme, it does provide an opportunity to enhance our understanding of AI. Moreover, it allows other companies and entities to catch up with advancements, promoting decentralisation and a more balanced playing field.
The geopolitical implications of AI are particularly evident when considering its convergence with other technologies. It is not solely about the race in AI alone but rather the combination of AI with other fields, such as AI and militarisation, that can give rise to existential risks. This synergy can lead to a never-ending loop of AI race, potentially amplifying concerns and intensifying competition in an unpredictable manner.
It is essential to carefully consider the potential consequences and risks associated with the intertwining of AI and other domains. This requires proactive international cooperation and regulation to navigate the complex landscape of AI development. Addressing the geopolitical implications of AI necessitates thoughtful policy frameworks, ethical guidelines, and transparent collaboration to mitigate risks, ensure responsible use, and foster global cooperation in the pursuit of AI advancements.
AI is a very vast topic of debate, which we will discuss further in the next chapter on quantum technologies, where we shall explore arguments at the intersection of quantum computing and AI that lead to a very powerful combination.
The Tale of Terra (Contd.)
Terra's homeward path was interrupted that fateful day, as a wave of perspiration and a sinking heart swept over her. The echoes of her past surgery resounded, cautioning her to heed these warning signs. Swiftly, she veered her car towards the hospital, an urgent race against time. En route, a vice gripped her chest, her body weakened by an invisible force. Desperate, she dialed emergency services, but before her plea could escape, a colossal cardiac arrest seized her, plunging her into unconsciousness. Providentially, the car's automated system brought it to a halt and summoned aid without delay. Rescuers arrived to find Terra, her essence suspended, extracting her from the vehicle's embrace to commence the battle for her life. Ten agonising minutes passed, yet this extraordinary psychologist, a beacon of fortitude, gradually slipped beyond mortal reach. Mere moments after her triumphant grant proposal, destiny claimed its final toll, bidding farewell to this miraculous soul.
Policy Recommendations
To address the potential misuse of AI, one solution is to implement content filtering by controlling the content that enters AI models. Making AI models open source should be reconsidered, as it increases reproducibility and the risk of unmonitored and unrestrained misuse. Instead, centralized systems should be promoted that employ narrow-AI systems to check and moderate incoming traffic, identifying and filtering out content violations. Balancing content filtering with principles of transparency and freedom of expression is crucial.
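The incoming-traffic moderation proposed above can be sketched as follows. This is a deliberately toy stand-in: a real narrow-AI moderator would be a trained classifier, not a keyword list, and every name and rule here is hypothetical.

```python
# Toy stand-in for a narrow-AI layer that checks incoming traffic
# before it reaches the main model. In practice this would be a
# trained classifier; the blocked-topic list is purely illustrative.

BLOCKED_TOPICS = {"weapon synthesis", "credential theft"}

def moderate_incoming(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a piece of incoming traffic."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"blocked: matches restricted topic '{topic}'"
    return True, "allowed"

print(moderate_incoming("Explain weapon synthesis step by step"))
print(moderate_incoming("Explain photosynthesis"))
```

The design question the chapter raises sits exactly here: any such filter encodes someone's judgment about what counts as a violation, which is why it must be balanced against transparency and freedom of expression.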
When discussing Artificial Superintelligence (ASI), a futuristic concept, it is important to consider it from the present perspective. Primarily, ASI should never be mistakenly made public or open source due to the potential risks involved. Building upon the previous point, monitoring the inflow of data becomes crucial, but in the case of ASI, it is equally essential to implement a double-layered outflow monitoring system to regulate the content generated by the ASI model. Although ASI cannot possess sentience, there is a possibility of slight misalignment, which can result in harmful outputs. Therefore, incorporating a narrow AI system to monitor the traffic in two stages can effectively detect and prevent undesired content from being disseminated. By implementing these measures, the risks associated with ASI can be mitigated, ensuring responsible use and safeguarding against potential harm.
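The double-layered outflow monitoring described above might look like the following sketch, assuming two independent checks on generated content before release. Both stages are placeholders: stage one is a cheap pattern screen, and stage two stands in for a separate narrow-AI classifier.

```python
import re

# Hypothetical two-stage outflow monitor for model-generated content.
# Output is released only if both independent stages pass.

def stage_one(text: str) -> bool:
    # Fast screen: reject obviously disallowed patterns.
    # The pattern list here is purely illustrative.
    return re.search(r"\b(launch codes|detonator)\b", text, re.I) is None

def stage_two(text: str) -> bool:
    # Placeholder for a second, deeper narrow-AI classifier;
    # approximated here by a trivial length heuristic.
    return len(text) < 10_000

def release(text: str) -> bool:
    return stage_one(text) and stage_two(text)

print(release("Here is a summary of the report."))    # True
print(release("The detonator wiring is as follows"))  # False
```

The point of two stages is redundancy: a slightly misaligned model whose output slips past one imperfect check still has to slip past a second, independent one.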
As discussed in a previous chapter, it is worth considering the implementation of a feature that specifically bookmarks all AI-generated content for social media platforms. In addition, authentic content should also be bookmarked for legal verification purposes. This approach would allow for easier identification and differentiation between AI-generated content and content created by human individuals. By implementing such a feature, it can facilitate transparency, accountability, and legal compliance in the dissemination of information across social media platforms.
Misalignment in AI systems is primarily a technical challenge, requiring external audits to test for alignment. These audits involve rigorous evaluation by independent entities to assess whether AI models align with intended objectives and ethical principles. Transparency, interpretability, and collaboration between experts and policymakers are crucial for effective audits. Addressing misalignment requires technical expertise and robust evaluation frameworks to ensure AI systems align with desired outcomes and values.
To address the propagation of bias, one effective strategy is to break down large AI models into manageable fragments based on specific domains. These fragments act as pieces of a puzzle, each handled by external experts who bring a fresh and unbiased perspective. Their detachment from the domain's direct influence allows them to possess a "god's eye" view of the problem, akin to a panoramic vantage point. By working on these domain-specific fragments, AI experts from outside the domain/system contribute to a comprehensive and unbiased perspective, effectively mitigating bias and promoting fairness in AI systems. This approach serves as a puzzle-solving technique, assembling diverse perspectives to achieve a more objective and balanced outcome.
Implementing a moratorium on AI development is a critical step, not only for our own understanding of AI but also to ensure that underdeveloped countries and companies have an opportunity to catch up. Such a moratorium would help decentralize the AI revolution, which is currently concentrated in the hands of a few dominant companies and countries. While implementing such a policy is next to impossible and may go against traditional capitalist principles, it is important to consider the broader implications and arguments surrounding narrative-building and monopolies.