Prepared based on a Forbes article
Today, artificial intelligence (AI) is our "new race." While companies race to invest in the technology, employees brace for its impact. According to a recent Pew study, American workers are more concerned about AI's effect on their jobs than they are hopeful about its benefits. AI, like nuclear energy and any other technology, has no conscience of its own. Whether it serves good or ill depends on humans, writes Forbes.
According to the World Economic Forum (WEF), two-thirds of employers plan to hire workers with AI skills, while 40% expect automation to cut jobs. The WEF predicts that 90 million jobs will disappear worldwide over the next five years. Whether AI becomes a force for good or evil depends on people. Humanity must remain the ultimate decision-maker, retain its moral authority, and keep technology from "going too far." That can be achieved only by strengthening distinctly human skills.
CRITICAL THINKING – THE “NONSENSE” DETECTOR
One teacher set himself a goal: not just to teach his students facts, but to develop their "nonsense detectors." He wanted them to ask questions, to analyze, and to refuse to settle for superficial answers, cultivating in them a healthy skepticism and a hunger for truth.
Never before has this skill set been so necessary. AI generates information at incredible speed – but without critical thought, logic, or accountability. The World Economic Forum has named AI-driven misinformation a leading global risk for the next two years. Moreover, a study by Microsoft and Carnegie Mellon suggests that over-reliance on AI erodes our own critical-thinking abilities.
ETHICAL DECISION MAKING
AI can be designed to align with human values, but such alignment is ultimately just programming. AI is not neutral – it absorbs the biases, errors, and inequalities embedded in the data it learns from.
AI is already being used for harm: deepfake videos, fraud, misinformation. But even well-intentioned AI can lead to catastrophic consequences. The only defense is human ethical judgment – people able to intervene, correct, and question AI decisions before they become irreversible.
EMPATHY
AI is increasingly used in hiring, human-resources processes, and even lower-level management. The consequences are not always fair: automation can deepen social inequality, with the rich receiving human attention while the poor get only chatbots.
The more processes we automate, the greater the risk of losing our humanity. In the end, decisions that affect the lives of millions are increasingly made by indifferent algorithms rather than by human beings.
New technologies can slip out of our control if we do not govern them. We should not compete with AI – we must lead it by the hand.
