By: Dinh Thai Bao Tran
Artificial intelligence (AI) is changing the way people live, work, and connect with each other. From translation apps to environmental monitoring, AI has become a major part of global development. For Indigenous communities, however, this new technology brings both dangers and possibilities. AI can help preserve languages and support community empowerment, but it can also repeat old injustices, take away cultural knowledge, and threaten sovereignty. Understanding both sides of AI is important if we want a digital future that is fair and respectful of Indigenous knowledge.

AI systems are trained on large amounts of data that mostly reflect Western values and worldviews. Because of that, many AI programs carry built-in bias that can misrepresent or harm Indigenous people. For example, facial recognition tools often make more mistakes identifying people of color, including Indigenous people, than white users (Buolamwini & Gebru, 2018). These mistakes are not just “technical errors”; they show how unevenly power is distributed in data collection and design.

AI can also lead to cultural appropriation and data misuse. Many Indigenous songs, images, and languages are taken from the internet to train AI tools, often without permission or credit. This is a kind of digital colonialism, where what is being taken is not land but knowledge and culture. As the Indigenous AI Working Group (2020) warns, when AI systems are trained on Indigenous data without community control, it continues “a form of knowledge violence.”

Another serious concern is surveillance. In the name of “security” or “environmental protection,” governments and corporations have used AI to watch Indigenous activists. During pipeline protests in North America, police used drones and data analytics to track Indigenous demonstrators. These actions damage trust and repeat colonial control, only now through technology.
Even with these risks, many Indigenous communities are using AI in creative and positive ways to protect their cultures and rights. A clear example is language revitalization: all around the world, AI programs are helping record, translate, and teach endangered Indigenous languages.

AI can also help monitor and protect the environment, which is deeply connected to Indigenous sovereignty. In the Amazon, the Waorani community has used AI-based mapping tools to find illegal logging, combining traditional knowledge with modern science. These projects show that technology does not have to work against Indigenous culture; it can strengthen Indigenous peoples’ role in protecting the Earth.

AI can also help reduce the digital divide, bringing education, healthcare, and information that fit each community’s culture. For example, AI translation tools can help Indigenous students learn in their own languages, and medical AI systems can adapt to their specific health needs. When technology is guided by ethics and cooperation, AI can become a tool of empowerment, not oppression.

At the heart of this issue is the idea of Indigenous Data Sovereignty: the right of Indigenous Peoples to control how their data is collected, owned, and used. Respecting this right means:
As Indigenous technologist Dr. Jason Edward Lewis has said, “Some knowledge is sacred — not everything that can be coded should be coded” (Lewis, 2020). True innovation happens only when humans build technology with humility, gratitude, and respect for its limits. AI itself is neither good nor bad — it reflects the values and goals of the people who create it. For Indigenous communities, the challenge is not only to resist AI but to re-imagine it in fairer, more inclusive ways. By claiming control over their data, setting cultural protocols, and designing systems that reflect Indigenous worldviews, communities can make AI a tool for justice rather than exploitation.

The future of AI and Indigenous rights depends on relationships: between people, between knowledge systems, and between technology and the Earth. AI can be used to exploit, but it can also be used to protect and connect. If guided by respect, collaboration, and environmental awareness, AI can heal instead of harm. The question is not how AI will shape Indigenous futures, but how Indigenous values can shape the future of AI.

SOURCES:
By: Owen Jenkins. Owen's personal beliefs about human rights extend to an evolving understanding of Indigenous rights, the past and present ways in which our nation has failed Indigenous communities, and the ways in which we can pave a way forward that is equitable and just.
With the rise of Artificial Intelligence (AI), there has been increasing concern about the integrity and reliability of AI, and about the accuracy of the information drawn from the pool of data from which AI generates the responses it passes on to users. When someone submits a prompt, the system draws on a vast amount of varied and sometimes conflicting inputs from across the internet, including corners of the web filled with racist and sexist articles and ideologies. When users get their information from AI-generated responses, they are fed what they perceive to be convincing evidence to support their claims. Individuals who post their opinions, factual or not, onto online platforms influence AI’s understanding and resulting output. This leaves open the potential for misinformation to spread like wildfire across the web, shaping AI’s understanding of the world we live in today. While the terms of service for many social media platforms allow posts to be flagged for misinformation, a vast amount of shared information will go undetected. AI technology can therefore be used to harm Indigenous culture: for decades, Indigenous People have been poorly represented in movies and media, to name a few examples, and this failure to accurately portray and represent Indigenous culture and history will have an effect on how AI perceives Native people and culture, making it easier to pass misinformation to the public through AI inquiry. In an AI image-generation study, a man named Sean Dudley explored how AI pictures a Native American town. When generating a Southwest American town 70 miles from Tuba City, the largest community in the Navajo Nation, the AI showed a modern-looking town with modern cars and neatly kept roadways.
However, when they ran the same prompt with Tuba City, the generated image looked dilapidated, with junky old cars, dirt roads, and sheds for houses, compared to the more modern and thriving look of the first prompt. This study shows how severely biased views of Native culture can creep into AI output. In reality, Tuba City is a mixture of beautiful landscapes, historic structures, and small-town houses, nothing like the sad image that AI generated for this study. This kind of misinformation can alter the way we come to understand an entire group of people and their culture (Dudley et al., 2024). AI, however, is not always a bad tool. In other ways, AI supports the spread of information that can enhance our understanding of Indigenous culture. For example, it can be used to keep Indigenous languages alive, which would mean a great deal for the revival of languages that have become endangered. Dr. Jared Coleman, a translation tool developer at Loyola Marymount University, says, “For endangered, no-resource languages, creating translators is challenging, and accuracy is even more critical … for this reason, our goal isn’t to produce perfect translations but to generate accurate ones that closely capture the user’s intended meaning.” The other side of that coin, however, is closely related to the concern stated earlier: AI may not accurately present the language and its interpretations if misinformation is spread consistently across the internet (Jiang, 2025). The world is changing quickly, and with it our use of technology. As with our past, we are accountable for how we remember and understand each other in this world. This responsibility rests on the shoulders of each individual, and so we must be cognizant of the ways in which we rely on and trust the technologies that shape our understanding of the world.

References
By: Asli Gutierrez
Indigenous languages are central to the identity of Indigenous people, to the preservation of their cultures, worldviews, and visions, and are an expression of self-determination. But many of these languages are at risk of disappearing: by 2050, only around 20 Indigenous languages are expected to remain in the United States. This raises an important question: could Artificial Intelligence (AI) help keep Indigenous languages from extinction? While AI offers hope for saving these languages, it also poses risks for Indigenous communities, especially their languages. This article will highlight both the opportunities and the risks of AI in preserving Indigenous languages.

While AI has great potential to preserve Indigenous languages, there are significant risks for Indigenous communities. First, it could harm them by misunderstanding languages and undermining cultural knowledge, because AI is trained on incomplete or inaccurate datasets. As a result, AI can produce translations or interpretations that are wrong and misleading. This contributes to spreading misinformation and undermines efforts to preserve and respect Indigenous knowledge and linguistic diversity. As Danielle Boyer explains on the podcast Can AI save endangered Indigenous languages?: "Even if it's 100% correct, I still worry that it might lack important context." Second, AI cannot replace the human ability to communicate. Indigenous languages are living traditions, meant to be spoken and shared. Learning from elders or other community members is essential to keep the languages alive and meaningful. Without human connection, AI might turn living languages into something artificial: words without the context or culture that give them life. Despite the risks, AI can be a powerful ally for Indigenous communities in keeping their languages alive.
First, AI can be used to record and teach languages. Walking in Two Worlds explains how First Nations computer programmers are developing AI technology explicitly to assist in preserving endangered languages, recording them and making them accessible for future generations. Similarly, Danielle Boyer's Scobot robot engages with children in Nishinaabe, speaking in the children's own language. By making language learning interactive, AI can help preserve these living traditions. Second, AI can enable community-led and ethical language conservation strategies. UNESCO emphasizes Indigenous agency over AI technologies and data, to make sure communities have power over how their languages are used and shared. Doing this prevents exploitation, respects cultural knowledge, and ensures AI strengthens rather than harms Indigenous sovereignty. Finally, AI can be employed to reach younger generations and make language learning enjoyable. Peter-Lucas Jones notes that AI can help youth bond with their heritage while learning, exposing more people to language learning and making it fun. Through tools like the Scobot, children not only learn their language but also interact with technology that represents their culture, customs, and community identity. By embracing innovation with care, AI can be a great ally in maintaining Indigenous languages.

In conclusion, AI is a tool for Indigenous languages that comes with risks. It can strip away the human connection that makes Indigenous languages feel alive, but if it is used correctly and responsibly, it can help preserve languages and support communities in keeping their culture strong. The key to making it work is ensuring that Indigenous communities are the ones guiding how AI is used. Projects like Danielle Boyer's Scobot show that, with care and community involvement, AI can help keep languages alive and meaningful for the next generations.

References
Risks of AI for Indigenous Peoples (bias, surveillance, cultural appropriation, data exploitation)
11/2/2025
By: Arbay Abdulahi
I am a contributor to the Earth Daughters campaign, and I believe that artificial intelligence left unexamined cannot be a futuristic tool for progress. Instead, it is a threat of digital colonization that could erase the identities and knowledge of Indigenous peoples. Right now AI is everywhere, and it has many negative effects: for example, it can spread misinformation even as people rely on it for data entry and to manage systems. For Indigenous people, AI brings a particular threat, because it is acting like a new colonial power, taking what is not theirs and making it into something different. This is not only a technical issue; it is also a fight for human rights and environmental justice. I would say that one of the biggest problems with AI is that it gets all its information from data collection. Artificial intelligence leaves out and wrongfully judges Indigenous communities; it is biased, and when it is used by governments or agencies, it creates real harm. In the article "Artificial intelligence and indigenous people's realities," Nina Sangma writes, "For indigenous peoples the dawning age of Artificial intelligence threatens to exacerbate existing inequalities and violence." This shows that AI is a threat: artificial intelligence does not just create new problems, it makes the problems that already exist even worse, because it works very fast and on a large scale. The systems that control artificial intelligence are risky for Indigenous people due to high-risk surveillance. For example, facial recognition is trained on data that poorly represents Indigenous people, so they are wrongfully identified, falsely accused, and unfairly targeted. AI tools make existing discrimination much worse by giving authorities new ways to watch and control marginalized communities.
In the article “Biased Technology: The Automated Discrimination of Facial Recognition,” Rachel Fergus states that “Law enforcement and the criminal justice system already disproportionately target and incarcerate people of color. Using technology that has documented problems with correctly identifying people of color is dangerous.” This shows that Indigenous people face a double burden of risk, and that adding an AI technology like facial recognition, which is not always correct, is dangerous. In conclusion, biased AI and the digital theft of culture are major new dangers to Indigenous peoples. AI has actively hurt Indigenous rights and damaged their unique cultural integrity by using their information without permission. For the Earth Daughters campaign, fighting for the land also means fighting for fairness in technology. We need to stand up for Indigenous sovereignty and make sure that AI does not digitally take over everything, and to do that, I believe we must all work together to figure out how to take control and manage this situation.

References
By: Ali Abdiaziz

Artificial Intelligence (AI) is changing how we live—from communication and education to environmental protection. But for Indigenous Peoples, AI is a double-edged sword. It can be used to continue harmful colonial practices, or it can support language, land, and cultural preservation. The impact depends on who is in control, and whether Indigenous rights are respected.

The Risk: Digital Colonialism and Bias

One major issue is data colonialism. Many AI systems are trained using data taken from the internet, archives, and public sources—often including Indigenous stories, images, languages, and land information—without consent. This is a digital form of exploitation, where outsiders use Indigenous knowledge for research or profit, while communities get little say or benefit. Facial recognition is another example. A 2018 study by Buolamwini and Gebru found that these systems often misidentify people of color. For Indigenous people, who already face over-policing in countries like Canada and Australia, this technology can increase the risk of surveillance and criminalization. AI is also used in environmental mapping and planning, often by companies looking to extract resources. When these tools ignore Indigenous land rights or traditional knowledge, it can lead to displacement and environmental damage (Mohawk & Smith, 2021). These technologies, though advanced, still carry the same old patterns of exclusion.

The Opportunity: Language and Land Protection

Despite these risks, Indigenous communities are using AI in powerful ways. Language revitalization is one key area. Many Indigenous languages are endangered, but AI can help document and teach them. For example, the First Voices project in British Columbia uses digital tools to preserve and share Indigenous languages online. These efforts are led by communities and help keep cultural identity strong for future generations (First Peoples’ Cultural Council, 2022). AI is also helping protect land.
In the Amazon, Indigenous groups are using satellite images and machine learning to monitor illegal logging in real time (BenYishay et al., 2021). This combination of tech and traditional land knowledge shows how AI can support environmental justice when used ethically.

The Solution: Indigenous Data Sovereignty

To make AI work for Indigenous Peoples, their data rights must be respected. This is known as Indigenous data sovereignty—the right of Indigenous nations to control their data. The CARE Principles were created to support this: Collective Benefit, Authority to Control, Responsibility, and Ethics (Carroll et al., 2020). These principles remind researchers and developers to prioritize Indigenous leadership, consent, and values when working with data or technology.

A Shared Future

AI is not neutral. It reflects the priorities of the people who create it. If Indigenous Peoples are included—and respected—AI can become a tool for healing and empowerment. The future of technology must be built with Indigenous voices at the center, not the margins.

References
Bio: Ali Abdiaziz is a student passionate about Art, technology, and Indigenous rights.
By Sequoia Wells
Sequoia Wells is a writer and student researcher focused on the intersections of technology, environmental justice, and Indigenous rights. Artificial Intelligence (AI) is rapidly shaping the world we live in—transforming communication, education, and environmental monitoring. For Indigenous communities, however, the growth of AI presents a double-edged sword. While it holds potential for language revitalization and cultural preservation, AI also poses serious threats to Indigenous sovereignty, data rights, and environmental justice. Historically, Indigenous communities have been excluded from conversations about the technologies that impact them. Today, AI systems are being trained on unregulated and culturally insensitive data—extracted without consent or context. This mirrors colonial practices of knowledge exploitation, where sacred traditions and languages are taken, commercialized, and used without returning benefits to the communities they originate from. One alarming example involves the Lakota Language Consortium, a nonprofit that approached the Rosebud Sioux Tribe with promises of collaboration. However, after gathering recordings and resources, the organization copyrighted the materials and attempted to sell them back to the community—undermining trust and violating the principle of Indigenous data sovereignty (The Take, 2025). Indigenous educator and robotics designer Danielle Boyer highlights how governments and corporations want access to Indigenous knowledge—but not to connect it to Indigenous people themselves. Her work creates culturally responsive robotics programs for Indigenous youth, designed with intention, accuracy, and community permission. Unlike commercial Large Language Models (LLMs) like ChatGPT, her systems are built to connect, not extract. “What’s the point of language if not to connect people?” she asks.
AI trained on scraped internet data cannot replicate the cultural nuance, context, and meaning embedded in Indigenous languages and traditions (Boyer, 2025). Beyond cultural erasure, AI’s environmental footprint also harms Indigenous land. Training large-scale AI models consumes vast amounts of energy, minerals, and water—often sourced from Indigenous territories without consent. According to the University of Bonn’s Institute for Science and Ethics, we must differentiate between “AI for sustainability” and the “sustainability of AI.” While AI can support climate solutions, it must not do so at the cost of the communities already leading environmental stewardship (van Wynsberghe, 2024). AI also risks reinforcing stereotypes. A 2025 article by the Navajo Times revealed that image generators produced harmful, inaccurate depictions of Navajo people—such as “mystical smoke” prayer circles and struggling students—highlighting the unchecked bias embedded in AI training data. As Navajo-Hopi filmmaker and scholar Angelo Baca emphasizes, Indigenous people must be able to protect their stories, images, and voices. The Indian Arts and Crafts Act of 1990 was designed to prevent cultural misappropriation in commerce, but it does not yet extend to the AI space (Baca, 2025). So where do we go from here? The path forward requires AI systems that respect Indigenous rights, governance, and cultural integrity. That means enforcing meaningful consent, honoring data sovereignty, and investing in community-led innovation. Indigenous people must be active participants—not passive data sources—in designing the future of technology. AI is not inherently bad. But when developed without ethical guidance, it becomes a modern tool for age-old systems of extraction and exclusion. We must ask ourselves not just what AI can do, but who it should serve—and on whose terms.

References (APA 7th edition)
By: Ismahan Salat
I’m Ismahan Salat from Burien, WA. I care about environmental justice and supporting Indigenous communities in protecting their culture, land, and rights. AI is everywhere. It’s supposed to make life easier, but for Indigenous communities it’s tricky. It can either help protect culture and land or mess things up even more. The difference comes down to who’s in control and whose voices actually matter. AI can seriously mess things up if it’s biased or used without permission. Most AI is trained on data that ignores Indigenous languages, stories, and histories. That means it can erase perspectives, spread stereotypes, or just get things straight-up wrong. Facial recognition technology, for example, misidentifies people of color far more often than white people, which could put Indigenous activists at risk of surveillance (Buolamwini & Gebru, 2018). Cultural appropriation is another problem. AI scrapes songs, art, and stories online without asking. That’s knowledge stolen out of context, and it ignores Indigenous rights. The Indigenous AI position paper calls this “digital colonialism” because it’s basically repeating old patterns of taking what isn’t yours (Lewis et al., 2020). But AI isn’t all bad. If done right, it can actually help. One of the coolest things is saving Indigenous languages. Many are disappearing, but AI tools can help preserve them. Canada’s FirstVoices platform uses AI to make dictionaries, learning tools, and archives, all controlled by Indigenous language experts (First Peoples’ Cultural Council, 2023). AI can also help protect the environment. Indigenous nations are often fighting to defend land from climate change and resource extraction. AI can track deforestation, water pollution, and animal migration, giving communities proof to protect their territories and traditions. The real key is who controls the data. Indigenous data sovereignty means communities decide how their information is used. That way AI works for them, not against them.
The Global Indigenous Data Alliance created the CARE Principles (Collective benefit, Authority to control, Responsibility, and Ethics) to guide AI in a way that actually respects communities (GIDA, 2019). AI isn’t neutral. It reflects the people who make it. For Indigenous communities, it can either repeat harm or create new opportunities to protect culture, land, and language. The future depends on centering Indigenous voices, giving communities control, and building AI that actually respects their worldviews. If we do it right, AI can help Indigenous Peoples survive and thrive instead of becoming another way to erase them.

References
By: Marissa Heard
Marissa Heard is a birthworker, writer, and community advocate trained by Black American and Indigenous midwives, committed to uplifting their voices and carrying forward the traditions that shape her work. The AI revolution looks new, but its roots reach deep into old colonial soil. Behind the flashy branding of artificial intelligence lies a familiar pattern of control, extraction, and dispossession. For Indigenous Peoples, AI is not a neutral tool, it’s the newest face of an old colonial story and, unless challenged, these systems will not liberate. Instead, they risk colonizing by carrying centuries of inequity into a digital future. Technology has long been tied to systems of power; however, it also carries potential when wielded with sovereignty. The question isn’t whether AI will transform our world, but whether that transformation repeats history or creates liberation. Across the globe, Indigenous communities are pushing back against the idea that AI is neutral. Facial recognition, predictive policing, and biased algorithms reinforce inequities that Black, Indigenous, and people of color already live with (Walter & Kukutai, 2018). Even language and cultural knowledge are at risk of being scraped into AI databases without consent, packaged as innovation while repeating centuries of theft (Cultural Survival, 2023). These are not accidents. They are symptoms of a colonial mindset that treats knowledge as another resource to be mined. Algorithms reflect the worldview of their designers, who overwhelmingly are not Indigenous, and therefore embed the same systemic inequities into digital systems (Walter & Kukutai, 2018). Still, Indigenous-led projects show what it looks like to disrupt this cycle. Te Hiku Media, a Māori-led initiative in Aotearoa, has used AI for language revitalization on their own terms, protecting cultural data from exploitation (Pulitzer Center, 2023). 
In addition, Aboriginal technologists in Australia are developing digital storytelling tools rooted in cultural protocols, challenging dominant narratives of innovation as extraction (Walter & Kukutai, 2018). These efforts demonstrate that AI can be reclaimed as a tool of sovereignty rather than surveillance. Frameworks like CARE (Collective benefit, Authority to control, Responsibility, Ethics) and OCAP (Ownership, Control, Access, Possession) provide blueprints for ethical data practices rooted in Indigenous governance (Carroll et al., 2020). They remind us that knowledge, like land, has to be governed with respect, consent, and accountability. Indigenous Data Sovereignty (ID-SOV) movements such as Maiam nayri Wingara in Australia and Te Mana Raraunga in Aotearoa are advancing this vision by demanding that Indigenous peoples be the decision-makers over how their data is created, stored, and used (Walter & Kukutai, 2018). These principles disrupt the myth of AI as neutral and push back against the notion that data is a free resource for corporate or state exploitation. Imagining Indigenous futures with AI means centering sovereignty, not sidelining it. Imagine AI tools that track illegal logging under the authority of Indigenous rangers, or digital archives that return stories and songs to their communities instead of locking them in corporate databases. AI can be a tool of liberation or a weapon of oppression; the difference lies in whose hands shape its purpose. Indigenous communities are already leading the way toward futures where AI strengthens, rather than erases, culture and survival. Whether the world will join them in building that future or repeat the mistakes of colonial history in digital form remains the true test of this technological revolution.

References:
By: Hamdi Elmi
Hamdi is a Running Start student interested in learning about and fighting for environmental issues and Indigenous peoples. Artificial Intelligence (AI) may be regarded by some as new or exciting, but for Indigenous Peoples it often feels like yet another repetition of history. Technology is not simply a tool; when used incorrectly, it has the potential to become yet another instrument to take from, surveil, and harm Indigenous Peoples. The question is whether AI will merely repeat the same old harms or whether it can be transformed in ways that truly support Indigenous futures and create meaningful solutions and change. AI systems are never neutral; they are often built on biased data and reflect the biases of those who build and use them. Indigenous communities have already faced surveillance, criminalization, and similar harms related to predictive policing and security practices. Cultural appropriation is yet another risk, as AI systems can replicate and appropriate Indigenous designs, songs, and stories without consent. When sacred knowledge is reduced to “data,” it loses any meaningful relationship with the actual people connected to it and creating it. It is important to recognize that although these risks exist, Indigenous communities are already demonstrating how technology can be used positively within Indigenous cultural structures. Language revitalization provides a powerful example. In Canada, FirstVoices is developing digital applications that strengthen Indigenous languages under community control (First Peoples’ Cultural Council, 2020). In Aotearoa, Māori-led Te Hiku Media has developed language learning tools in a culturally appropriate manner that maintains protections on cultural knowledge (Lewis, Arista, Pechawis, & Kite, 2021). These examples show that technology can help Indigenous culture when used properly and positively, which is essential if the benefits are to be realized.
These initiatives are grounded in Indigenous data sovereignty, which holds that Indigenous Peoples have a right to control data about their peoples, lands, and knowledge. The CARE Principles explain this right and provide tools for making sure it is honored (Carroll, 2020). Rather than treating data as something anyone can take, Indigenous data sovereignty treats data as living knowledge that must be cared for and protected. This understanding of data extends to environmental protection; for example, Indigenous rangers can use digital applications to track illegal logging or measure climate impacts. Here technology is a means to defend land and water rather than exploit them. Whoever holds the power to guide technology will shape its future. For Indigenous Peoples, that future can either remain one of continued violation, or it can become a tool for the preservation of languages, culture, and lands. If technology is ever to be a force for justice, it must center Indigenous sovereignty, respect consent, and follow the leadership of Indigenous organizations. Indigenous communities have been leading for many years; now it is up to the world: will it respect and join them, or will it keep colonial modes alive in digital form? These are important questions that will help us think about what the future will look like for Indigenous peoples and how AI will impact them.

References: