Today’s tech ecosystem resembles the future we have been envisioning since films about robots taking over the world first hit cinemas: artificial intelligence (AI) has become a household name and entered the business mainstream.
While we are likely miles away from AI dominating the planet, fears have mounted in recent years over its capacity to exacerbate existing issues, particularly disinformation, bias, inequality, and now, deepfakes.
With AI’s immense potential to disrupt industries, even the global workforce is rattled by the prospect of one day being replaced by machines, a fear that persists despite some reports assuring the general public otherwise.
The Human Cost of Machine Efficiency
According to a joint study by the United Nations’ International Labour Organisation (ILO) and Poland’s National Research Institute, one in four jobs worldwide is in danger of being impacted by AI. However, the research pointed not to job loss but to job transformation, as automation with generative AI becomes rampant. High-income countries are the most susceptible at 34%, while low-income countries stand at 11%. The findings are based on some 30,000 real-world job descriptions, analysed through worker surveys, expert surveys and AI analysis.
The growing adoption of AI among businesses was cemented in separate research conducted by management consulting firm McKinsey & Company at the end of 2024, revealing an uptick in organisations adopting AI in business functions. Specifically, 78% of respondents reported utilising AI in at least one business function, up from 74% in early 2024 and 55% in 2023.
The McKinsey study noted that polled organisations use AI widely in their marketing and sales (55%), product or service deployment (39%), information technology (31%), service operations (30%) and knowledge management (26%).
As AI increasingly creeps into businesses and enterprises, there has also been a notable rise in upskilling their workforce to ensure they remain relevant and can ride the wave of this transformation in the labour market. Experts project that by 2030, almost three-quarters of the skills people use at work today will be different as AI advances. That means many workers will need to learn new abilities or adapt existing ones as jobs evolve.
While these studies may ease fears about AI replacing jobs, recent layoffs in the tech industry demonstrate that the threat remains. Some of the largest tech corporations, including Microsoft, Meta, Amazon and Alphabet, have injected significant capital into AI data centres to fund their AI-driven initiatives while choosing to lay off thousands of workers, a dead giveaway of how this technology is reshaping the labour market.
But mass dismissals can’t be blamed on AI alone; they also stem from broader waves of innovation. To give a better picture, from January to July 2025, approximately 80,116 employees were dismissed across the tech industry. So far, this is lower than the more than 150,000 registered by the end of 2024 across 549 companies.
Given these figures, it’s understandable that many remain cautious about fully embracing AI, not only in enterprises but also in legal settings. But the fear isn’t just about jobs disappearing but also about how easily such a powerful tool could be misused.
AI’s Hidden Risk: Innovation Without Ethics
AI outperforms humans in processing data and executing tasks and commands with speed and precision, making it a reliable operational tool, especially in a demanding work setting. Yet it lacks a distinctly human quality that it cannot replicate: ethical judgment.
Humans are guided by ethics, which allow them to be morally responsible – to differentiate right from wrong and uphold accountability. Machines lack this capability, raising the question of where responsibility lies if and when AI goes wrong.
Ethics are not innate to large language models (LLMs); humans feed them the data they need to learn patterns, generate predictions and perform tasks effectively. This highlights not only that AI’s ethical performance relies heavily on human guidance and oversight, but also how crucial data is in this digital age.
Rather than fuelling fears about advanced technologies like AI and feeding some people’s delusions about machines taking over jobs and industries, corporations and developers should treat AI accountability as a shared responsibility.
This responsibility also falls on governments, which must set clear rules and safeguards so AI is built transparently, checked regularly, and kept fair, helping prevent irreversible damage. However, rules and regulations are not enough on their own; ordinary users of AI systems should also be part of the conversation, as they have the power to shape how AI can be used ethically.
While human intervention plays a crucial role in upholding AI ethics, it is also essential to explore other emerging technologies that can reinforce this collective responsibility, and blockchain may just be the most ideal tool to pair with AI.
Blockchain as AI’s Moral Compass
Blockchain is not a technology to be sidelined. While it is less prominent than AI, its capabilities should not be underestimated: it offers qualities that AI lacks – transparency and traceability – and it might just be the solution to the “black box” problem.
Because of AI’s complexity, tracing how a system makes specific decisions can be challenging. This lack of transparency makes AI susceptible to hidden biases and raises concerns about fairness, accountability and trust in its outcomes.
Blockchain, as an immutable ledger, can record data inputs, decision paths and system updates, creating an audit trail that cannot be altered. This makes AI’s “black box” more understandable and ensures accountability when things go wrong. Such documentation makes blockchain an ethical safeguard for AI systems, reducing the risks of hidden manipulation and bias.
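The core idea behind such an audit trail can be illustrated in miniature. The sketch below is a toy hash-chained log, not a real blockchain (no consensus, no distribution): each AI decision record includes the hash of the previous record, so altering any earlier entry breaks every link after it. All function and field names here are illustrative assumptions, not from any particular system.

```python
import hashlib
import json
import time

def record_entry(chain, payload):
    """Append a decision record whose hash links to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    # Hash the canonical JSON form of the entry's contents.
    entry["hash"] = hashlib.sha256(
        json.dumps({k: entry[k] for k in ("timestamp", "payload", "prev_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash; any tampered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"timestamp": entry["timestamp"],
                        "payload": entry["payload"],
                        "prev_hash": entry["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical decision records from an AI hiring or lending system.
log = []
record_entry(log, {"model": "scorer-v2", "decision": "approve"})
record_entry(log, {"model": "scorer-v2", "decision": "deny"})
print(verify_chain(log))                  # chain intact
log[0]["payload"]["decision"] = "deny"    # tamper with an earlier decision
print(verify_chain(log))                  # tampering detected
```

A production system would additionally anchor these hashes on a distributed ledger so no single party could rewrite the log, but the detection principle is the same.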
Integrating blockchain into AI systems is increasingly important as the latter evolve beyond simple chatbots into roles that directly impact people’s daily lives, such as hiring, healthcare and finance – sectors where bias and discrimination are prevalent. By harnessing both tools together, organisations can provide verifiable proof that decisions were made fairly and based on trustworthy data, further reinforcing ethics and public trust.
Bringing blockchain into the realm of AI goes beyond championing ethical AI standards; it helps shift the conversation from fear to empowerment and turns scepticism into trust, encouraging further innovation as stakeholders gain a powerful system where accountability is built in.
On a broader scale, combining AI and blockchain could help lay the foundation for global standards in responsible technology. Since neither of these systems is limited by borders, they provide an opportunity for governments, businesses and organisations to work together on common rules for openness and responsibility. This teamwork could ensure that ethical AI use becomes a worldwide standard, not just a choice for individual companies.
While it is evident that AI has immense power to reshape how we operate as a society, the future cannot be built on speed and efficiency alone. AI’s synergy with blockchain is pivotal in developing a future where trust, transparency and accountability flourish and innovation aligns with ethical responsibility. This integration ensures that progress does not come at the cost of fairness or morality.
See blockchain and AI in action and discover how they can work together to build a more ethical and transparent future by joining us at the London Blockchain Conference on 22-23 October. Register here and be part of the global conversation shaping the next era of technology.