
  • Am I scared of AI? I am

    Originally published in Manila Bulletin. Am I scared of AI? I am. And this is precisely why I’m talking about it. Artificial intelligence (AI) is quickly reshaping every aspect of our world. It has already begun transforming critical sectors such as healthcare, finance, education, communication, creativity, and social connectivity. AI's potential to address complex global challenges, optimize processes, improve efficiency, and foster groundbreaking innovations is remarkable. We stand at the brink of an unprecedented era, promising revolutionary advancements that could enhance human life, eradicate longstanding diseases, and solve issues that have eluded solutions for generations. Yet, alongside these exciting prospects, a less optimistic scenario also looms. It's uncomfortable to confront, but essential to acknowledge: the risks AI brings. Its impact could be profoundly disruptive: affecting jobs, exacerbating inequalities, entrenching systemic biases, and even facilitating the manipulation of emotions, opinions, and decision-making processes on an unprecedented scale. The very technology we create to improve our lives could inadvertently undermine our freedoms and deepen existing societal divisions. Yes, this worries me deeply. AI’s core strength—its ability to rapidly learn, adapt, and make autonomous decisions—also represents its greatest vulnerability. The sheer complexity and scale of AI systems can easily surpass human understanding and oversight. How AI decisions are made often remains opaque, even to the engineers who design them. This lack of transparency is troubling, as it raises significant ethical, legal, and practical questions about accountability. Who exactly controls these sophisticated technologies, and whose interests do they ultimately serve? Furthermore, AI's transformative power extends into areas we might not immediately recognize. Algorithmic biases embedded in AI systems can perpetuate historical injustices and discrimination, resulting in unintended and harmful consequences. For instance, facial recognition technology has exhibited racial biases, and automated decision-making in finance or criminal justice can reinforce systemic inequalities. Without careful attention, AI risks amplifying the worst aspects of our society rather than uplifting the best. Consider language, for instance. AI struggles significantly with digital equity in language representation. While AI demonstrates remarkable proficiency in English, it falls short when dealing with less globally dominant languages. In the Philippines, AI’s limitations are clearly evident. Though it can understand and communicate in Tagalog, it cannot do so with the same fluency, nuance, or capacity it demonstrates in English. Attempting to converse with AI in Karay-a, a regional language spoken by millions, yields minimal or no response. This digital inequity highlights a critical challenge—ensuring technological advancements serve all communities equally, not just the dominant or widely recognized languages. Despite these valid concerns, my fear does not lead to rejection of technological progress. Instead, it fuels a motivation to engage actively and responsibly with AI's ongoing development. Silence, indifference, or passive acceptance will not equip us to leverage AI's benefits or guard against its potential pitfalls. Open dialogue, widespread public understanding, and informed engagement are vital. I talk openly about AI precisely because it deserves thoughtful discussion. 
This dialogue increases awareness, encourages responsibility, and ensures that AI development aligns with shared human values. Talking about AI invites us to collectively consider the implications of this technology, and how we can manage it ethically and effectively. Fear, in this context, is not debilitating—it is empowering. It compels us to remain vigilant and proactive, ensuring we address ethical, practical, and societal challenges head-on. Fear demands respect for the magnitude of our creation, cautioning us to tread carefully, guided by clear principles and informed by broad perspectives. To be scared of AI is not a sign of weakness; rather, it's a demonstration of foresight. Recognizing the magnitude of AI's influence should inspire thoughtful dialogue and active involvement. Our task is to shape AI's trajectory so that its benefits are maximized and its risks minimized. We must ensure AI remains a tool for human advancement, not a force that dominates or diminishes our agency. So yes, I am scared of AI. And precisely because I am scared, I will continue to talk about it, inviting you and everyone else to join this critical conversation. It’s only through open discussion, collective effort, and vigilant oversight that we can hope to guide AI toward a future that genuinely enhances our lives and respects our humanity. This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.

  • Algorithmic independence: Charting an AI agenda for Africa, Latin America, and Asia

    Originally published in Manila Bulletin. Artificial intelligence is rapidly becoming the backbone of economic growth, public services, and even national security. Yet its benefits—and the power to set its rules—are overwhelmingly concentrated in a handful of wealthy nations. For countries across Africa, Latin America, and much of Asia, this imbalance not only stifles homegrown innovation but also risks perpetuating a new form of digital dependency. According to Tortoise Media’s 2024 “Leading 20 AI Countries by Research Capacity,” the United States tops the chart with a normalized score of 100, more than 45 points ahead of its nearest rival, China (54). No other nation even approaches these figures: Singapore registers 25, the United Kingdom 23, and both France and Switzerland sit at 18. Israel (17) and Germany (16) follow, while Canada and Hong Kong each score 15. Australia clocks in at 14, South Korea and the United Arab Emirates at 11, and India and the Netherlands at 10. Finland, Luxembourg, and Austria share a score of nine, with Japan and Sweden rounding out the top 20 at eight. These numbers lay bare the sheer scale of concentration at the apex of AI research—and the yawning gap that leaves entire regions on the periphery. Beneath these headline statistics lies another worrying divide: the gender gap in the talent pipeline. In both Europe and the United States, bachelor’s degrees in computer science remain heavily male-dominated. In Europe, women actually improve their representation at the master’s level—exceeding their share in undergraduate cohorts—whereas in the US their presence stalls, remaining roughly static from bachelor’s through master’s programs. This skew not only narrows the range of perspectives shaping AI’s future but also deepens the Global South’s reliance on technologies crafted without their input. For nations in the Global South, the practical implications are stark. Without local data centers and affordable high-speed connectivity, public institutions and private startups must subscribe to foreign AI services at premium rates. That throttles experimentation on context-specific challenges such as precision agriculture in sub-Saharan Africa or multilingual disaster-response systems in Southeast Asia. Culturally, models trained on Western datasets can misinterpret local languages, dialects, and social norms, undermining their utility and, in some cases, causing harm. On a deeper level, this dynamic amounts to “algorithmic colonialism,” where essential decisions—about creditworthiness, healthcare priorities, or public-safety interventions—are governed by opaque systems designed and controlled thousands of miles away. True AI sovereignty demands a fundamentally different approach, centered on co-creation rather than consumption. First, governments, universities, civil-society organizations, and local entrepreneurs must collaborate on open-source model architectures that embed regional languages, ethical frameworks, and policy priorities from the outset. By pooling expertise and sharing code, these partnerships can yield tools that reflect lived realities—whether that means low-bandwidth sensors for smallholder farms or AI-powered platforms for indigenous-language education. At the same time, substantial investment in physical infrastructure is essential. 
Multilateral development banks, philanthropic foundations, and impact-oriented investors should prioritize financing for renewable energy–powered data centers, edge-computing hubs, and expanded broadband networks. Lowering the capital barrier for domestic research labs and startups will unlock local experimentation and reduce dependence on costly external services. Equally critical is nurturing a diverse talent pipeline. Closing the gender gap in computer science education requires targeted scholarship programs, mentorship networks, and outreach campaigns that encourage women and other underrepresented groups to pursue both bachelor’s and master’s degrees in AI-related disciplines. Engaging diaspora communities can facilitate knowledge transfer and offer a fast track for specialized expertise to flow back to home countries. Legal and regulatory capacity-building must proceed in parallel. AI sovereignty is as much about rule-making as it is about hardware. Technical assistance for drafting data protection laws, ethical AI guidelines, and frameworks for independent audit bodies will empower nations to set—and enforce—their own standards. Establishing regional centers of excellence can pool scarce legal and technical expertise, crafting model regulations tailored to shared cultural and economic contexts. Finally, lasting progress depends on sustainable financing models. One-off grants can spark initial activity, but blended finance vehicles that combine public, private, and philanthropic capital are needed to underwrite long-term research agendas. Tax incentives, matched-funding schemes, and prize competitions can further catalyze local innovation and give domestic stakeholders a real ownership stake in the AI ecosystem’s growth. Shifting from passive consumption of imported tools to active co-creation of both technology and governance is the only way to ensure that AI amplifies local ingenuity, safeguards human rights, and drives development on equitable terms. When nations of the Global South build their own AI infrastructures, train their own workforces, and write their own rulebooks, they reclaim the agency to decide what “intelligence” means—and whom it ultimately serves. This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.

  • 10 things about AI

    Originally published in Manila Bulletin. In an era where algorithms curate our newsfeeds, drive our cars, and even assist in medical diagnoses, understanding artificial intelligence is no longer optional—it’s essential. AI isn’t an ethereal force; it is a collection of mathematical models trained on vast quantities of data. Like any powerful tool, it carries the promise of tremendous benefit and the risk of unintended harm. As these systems become ever more deeply woven into our daily lives, here are 10 things about AI that everyone must grasp. First, AI is fundamentally mathematics in motion. Underneath the user-friendly interfaces and sleek product designs lie tensors, probability distributions, and optimization routines. Whether it’s a linear regression model predicting housing prices or a deep neural network translating speech in real time, these systems learn by adjusting numerical parameters to minimize error, as the short sketch after this column illustrates. Recognizing this demystifies AI: It is not magic, but logical structures applied at scale. Second, today’s AI is narrow, not general. Narrow AI excels in tightly defined tasks—playing chess, spotting tumors in radiology scans, or recommending movies—but flounders when taken outside its training domain. The concept of Artificial General Intelligence, or AGI, remains speculative. Despite sensational headlines, no system today possesses true human-like reasoning or adaptability. Third, data is both the lifeblood and liability of AI. High-quality, representative datasets empower models to recognize patterns accurately. But data can also encode the biases of the societies it reflects. If a training set skews toward one demographic group, the resulting AI may underperform or discriminate against others. Mitigating such bias requires careful data curation, fairness-aware algorithms, and ongoing audits. Fourth, transparency and explainability are not mere buzzwords; they are the bedrock of trust. In domains like healthcare or criminal justice, opaque “black-box” models can jeopardize lives or liberties. Techniques for explainable AI—such as attention visualization or rule-based approximations—help stakeholders understand why a model reached a certain conclusion, enabling accountability when things go wrong. Fifth, ethics and alignment should guide every stage of AI development. Aligning AI objectives with human values prevents scenarios where an AI, pursuing its programmed goal, inadvertently causes harm. This means engaging ethicists, technologists, and the communities affected by AI applications to ensure that systems respect privacy, fairness, and safety. Sixth, robustness matters because the real world is messy. Models trained under controlled conditions can fail catastrophically when faced with unexpected inputs—adversarial attacks, shifting markets, or black-swan events. Building robustness involves stress-testing models under diverse scenarios and establishing fallback mechanisms so that failures degrade gracefully rather than catastrophically. Seventh, the tension between data-driven innovation and individual privacy has never been more acute. AI-powered personalization often depends on collecting and analyzing personal data at massive scale, raising concerns about surveillance and consent. Emerging solutions—federated learning, homomorphic encryption, differential privacy—seek to reconcile utility with confidentiality, but regulatory frameworks must evolve in parallel. Eighth, AI’s economic and social impact will be profound. 
Automation promises efficiency gains in manufacturing, logistics, and even creative industries, but it also threatens to displace jobs and exacerbate inequality. Societies must invest in reskilling programs, social safety nets, and inclusive policies to ensure that the benefits of AI are broadly shared. Ninth, the global nature of AI development calls for cross-border cooperation. Research breakthroughs and data resources are dispersed around the world; nationalist approaches to AI regulation risk fragmenting standards and impeding progress. International dialogue—spanning academia, industry, and government—is vital to harmonize best practices, safeguard intellectual property, and prevent an arms race in autonomous weapons. Finally, governance and regulation must evolve as quickly as AI itself. Traditional legal frameworks often struggle to assign liability when decisions are made by algorithms. Forward-looking policies can establish clear lines of responsibility, require impact assessments, and incentivize transparency. At the same time, overbearing rules risk stifling innovation. Striking the right balance will demand ongoing engagement between regulators, technologists, and civil society. AI will reshape our world in ways both predictable and wholly unforeseen. By keeping these ten principles in mind—about what AI is, how it learns, where it excels, and where it falters—we can harness its potential responsibly. In doing so, we ensure that AI serves human values rather than eclipsing them. This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.
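
The column’s first point notes that these systems learn by adjusting numerical parameters to minimize error. The minimal Python sketch below makes that idea concrete, echoing the column’s own housing-price example; the data points, learning rate, and variable names are invented purely for illustration, and real systems apply the same loop to vastly larger models and datasets.

    # Illustrative sketch only: a one-variable linear model "learns" by repeatedly
    # nudging two numerical parameters (slope w, intercept b) so that the
    # mean-squared error between its predictions and the data keeps shrinking.
    # Hypothetical data: house size (in hundreds of sqm) -> price (in millions).
    sizes = [0.5, 1.0, 1.5, 2.0, 2.5]
    prices = [2.1, 3.9, 6.2, 7.8, 10.1]

    w, b = 0.0, 0.0          # the parameters start knowing nothing about the data
    learning_rate = 0.05

    for step in range(2000):
        preds = [w * x + b for x in sizes]               # current predictions
        errors = [p - y for p, y in zip(preds, prices)]  # how wrong they are
        n = len(sizes)
        grad_w = 2 * sum(e * x for e, x in zip(errors, sizes)) / n
        grad_b = 2 * sum(errors) / n
        w -= learning_rate * grad_w    # adjust each parameter a little in the
        b -= learning_rate * grad_b    # direction that lowers the error

    print(f"learned rule: price = {w:.2f} * size + {b:.2f}")

Scaled up to billions of parameters and far richer data, this is essentially the “mathematics in motion” the column describes: no magic, just repeated numerical adjustment to reduce error.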

  • Language is the missing link in the AI revolution

    Originally published in Manila Bulletin. These were the thoughts running through my mind on the train to London AI Week, where I am slated to speak about a future where AI truly serves humanity. Around me, people spoke in German, Polish, Tamil, and English. The mix of languages reminded me that while the world grows more connected, the digital world remains linguistically lopsided. If you asked your phone in Polish for a definition of multimodal AI, you might get a passable response. Ask in Tamil, and it may struggle. Ask in Quechua or Wolof or Karay-a, and you’ll likely get silence. This isn’t just a technical quirk—it’s a quiet crisis. We’ve made tremendous progress expanding internet access, improving device affordability, and building digital skills across the globe. But one enormous gap remains: most of the world’s people cannot speak to AI—and cannot be heard by it—in their own language. There are 7,159 living languages spoken today. As of 2023, fewer than one percent of them are fully supported by AI systems. Only 32 languages were classified as “digitally thriving” in Ethnologue’s Global Digital Language Support Scale. Just 108 more were deemed “vital.” That leaves over 7,000 languages with limited or no access to speech recognition, machine translation, text-to-speech tools, or even reliable typing interfaces (a quick tally after this column shows how these figures add up). This means that while AI is marketed as a universal tool, it is still speaking mostly to the privileged linguistic elite. The majority of humanity remains on mute. The consequences are more serious than many realize. In a world increasingly driven by digital interaction, language is the gatekeeper. If your language isn’t recognized by the technology, then you are, in effect, locked out. Public health campaigns fail when vital information isn’t communicated in the local language. Disaster alerts lose their power when they’re issued in a tongue the local population doesn’t read fluently. Educational platforms and e-governance tools are out of reach when the interface speaks in unfamiliar words. Economic opportunity is stunted when you can’t sell, search, or serve in the language your customers actually use. And then there’s the cultural cost. Languages are not just tools of communication—they carry stories, songs, rituals, and knowledge systems. When a language lacks a digital footprint, it becomes invisible to the algorithms and models shaping our shared reality. And invisibility, especially in the AI age, is a fast track to extinction. Today, over 3,200 languages are considered “still”—barely surviving, with limited usage and almost no digital representation. Another 3,200 are “emerging,” spoken by nearly a billion people but similarly unsupported. These languages teeter on the edge. Their speaker communities may remain vibrant, but without digital relevance, their future is fragile. An entire generation could grow up unable to text, code, search, or even type in their own language. This need not be inevitable. We already have open-source platforms that allow communities to collect their own voice data. We have multilingual AI models capable of learning from smaller datasets. And we have a growing global awareness of the need for more inclusive AI. But awareness alone is not action, and market-driven development alone will not solve this problem. What we need is a shift in mindset—from seeing language support as a technical feature, to recognizing it as a matter of digital justice. AI companies must treat linguistic inclusion not as an afterthought, but as a core responsibility. 
Governments must include language coverage in their national digital strategies. Philanthropic funders should prioritize digital language equity with the same urgency they give to literacy, education, or internet access. Because if AI is to truly serve humanity, it must reflect humanity—not just in data points, but in words, voices, and tongues. As I stepped off the train and into London AI Week, I realized that the future of AI isn’t just about what it can do. It’s about who it will speak to—and who it will listen to. Right now, far too many people are being left out of the conversation. It’s time we changed that. This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.
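
A quick tally, using only the figures cited in the column above, shows that its numbers hang together; the short Python snippet below is purely illustrative and assumes that “fully supported” refers to the 32 “digitally thriving” languages.

    # Tally using only the figures cited in the column above.
    living_languages = 7159          # living languages spoken today
    thriving = 32                    # classified as "digitally thriving"
    vital = 108                      # deemed "vital"
    well_supported = thriving + vital
    print(living_languages - well_supported)            # 7019 -> "over 7,000" left behind
    print(round(100 * thriving / living_languages, 2))  # 0.45 -> "fewer than one percent"

On that reading, both “over 7,000” and “fewer than one percent” check out.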

  • Democratizing AI: A call for digital liberation

    Originally published in Manila Bulletin. I have long warned that AI’s rapid advancement could parallel historical patterns of colonization. If AI truly represents a black swan event—a disruptive moment in history—then we must confront what happens when 99 percent of the world’s languages are left behind. This is far more than a linguistic concern; it strikes at the heart of accessibility, representation, and digital equity. If we do not change who leads AI development, we risk inaugurating a new era of digital colonization, where only a privileged few command the tools to shape tomorrow. Language is the gateway to identity and knowledge. Every tongue carries unique stories, scientific insights, and cultural practices. When AI models are built almost exclusively on English, Mandarin, or Spanish, speakers of Quechua, Karay-a, Wolof, or Māori—and thousands of others—are effectively barred from AI-powered education, healthcare guidance, and local governance tools. This digital exclusion mirrors the colonial imposition of a ruler’s language, which erased indigenous voices across continents. To democratize AI, we must embed multilingual capabilities at its core, ensuring that every community can interact with, contribute to, and benefit from intelligent systems. But democratization cannot stop at translation. Accessibility demands that AI systems recognize diverse faces, speech patterns, and lived experiences. Today, facial-recognition algorithms struggle to identify darker skin tones accurately, and voice assistants stumble over non-standard accents. Such failures aren’t mere technical glitches—they reinforce social hierarchies by privileging users who already exist within narrow design parameters. To break this cycle, development teams must reflect the world’s rich diversity, drawing talent from underrepresented regions and cultures. Only then can we train AI on datasets that capture the full spectrum of human experience. Who controls the narrative of AI matters as much as the technology itself. If a handful of Western tech giants dictate the research agenda, we will see more surveillance applications, targeted advertising, and productivity tools shaped by corporate profit motives rather than public interest. Democratizing AI means inviting policymakers, ethicists, community leaders, and engineers from the Global South, indigenous nations, and marginalized communities into decision-making arenas. These stakeholders are best positioned to identify local challenges—be it smallholder farming, maternal health in remote villages, or preservation of endangered plant wisdom—and to guide AI toward serving those needs. A truly democratized AI ecosystem is inherently multipolar. Instead of funneling all research toward a small number of massive, monolithic models, we should champion open-source frameworks, federated learning networks, and regional innovation hubs. Envision universities in Accra, São Paulo, and Kathmandu each hosting AI platforms tailored to their linguistic and cultural contexts. Governments can mandate that public-sector AI tools be open by default. Philanthropic organizations can sponsor community-driven data-collection initiatives. Startups can thrive under policies that reward privacy-preserving, inclusive design. This decentralized approach not only accelerates innovation but also guards against the concentration of power and potential misuse. Yet software alone will not suffice. True digital equity requires robust infrastructure. 
Affordable broadband, local data centers, and edge computing capabilities must reach rural hamlets and underserved urban neighborhoods alike. Public–private partnerships should focus on last-mile connectivity and subsidized devices for schools, clinics, and cooperatives. Without these investments, even the most inclusive algorithms remain out of reach for those who need them most. Finally, democratization must be anchored by ethical guardrails and community governance. Transparent audit trails, citizen assemblies, and enforceable data-privacy rights are essential to prevent the rise of a corporate or state-controlled digital empire. We must design participatory frameworks that give local communities real say over data collection, model training, and application deployment. This is how we ensure AI serves the collective good rather than narrow interests. Decolonization once reshaped political and cultural sovereignty. Today, we stand at the threshold of digital decolonization. By weaving diverse languages, perspectives, and priorities into AI’s very fabric, we can transform it from a tool of exclusion into an engine of global empowerment. The stakes could not be higher: if 99 percent of the world’s languages and the human wisdom they carry are silenced, AI will have merely replicated colonialism in binary code. But if we democratize AI now, we can write the next chapter as a story of collaboration, resilience, and shared prosperity—one in which every community has the tools to thrive on its own terms. This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.
