By: Morgan Watters
Every day, artificial intelligence (AI) grows more prevalent in our lives, and with it comes a rightfully troubling dialogue about its climate impacts. Chances are you're familiar with the environmental concerns this technology sparks, such as the large amounts of fresh water consumed by data centers. It seems as though every day, facts are reposted on Instagram telling us of the horrifying quantities of fresh water wasted every time we ask a chatbot a simple question. While these numbers are concerning and may spark feelings of guilt, they often fade to the backs of our minds as we struggle to grasp the reality of such water waste, far from our comfortable and seemingly unaffected lives. Two things produce this effect. The first is a large hole in the conversation about AI's water use: who will actually experience the impacts? Much of the time this strain falls on already burdened communities, such as Indigenous peoples whose rights are frequently ignored in the face of technological development, an all too common form of racial capitalism. The second is a stark lack of geographical load balancing, a strategy in which the negative impacts of disruptive technology are shared evenly across the geographical regions of its users. Currently, many AI data centers are built in arid regions, often near or atop Indigenous land, positioning highly water-dependent infrastructure on already drought-stressed ground. This distribution of AI's environmental costs parallels historical practices of settler colonialism and racial capitalism (Kak & West, 2023). In response to environmental concerns, many initiatives are searching for strategies to increase the efficiency of these data centers; however, their ideas have major flaws.
As stated in the Harvard Business Review, these strategies frequently focus on "easily measurable environmental metrics such as the total amount of carbon emissions and water consumption" and don't sufficiently consider how AI's environmental costs can be "equitably distributed across different regions and communities" (Ren & Wierman, 2024). Additionally, the proposed solution of increasing data center efficiency to combat environmental impacts is challenged by the Jevons paradox, a cycle of attempted efficiency gains that has repeated throughout history and is ultimately counterproductive (Indigenous-led AI and data sovereignty in space, 2025). It works like this: developers attempt to reduce resource extraction by making data centers more efficient; the efficiency gains significantly lower the price of the technology; increased accessibility causes demand to skyrocket, creating a need to build more data centers; and the additional data centers increase extraction even further. The Jevons paradox seems to stump any plan to decrease AI's environmental impacts; however, Indigenous researchers have other ideas. Star Nations is an Indigenous-led initiative founded to support First Nations, Métis, and Inuit communities on the principle that data is kin, not oil. This ethos pushes back against the colonial extraction of resources such as oil from Indigenous peoples and insists that the pattern will not be repeated in a new form with Indigenous data. The initiative was created in response to the observed harm AI is doing to Indigenous land and the earth as a whole, and the foresight that these problems will grow. Star Nations proposes an alternative path in which AI's harm to our planet is avoided entirely by moving the technology to space.
This infrastructure would be an Indigenous-led orbital AI system built to reject big tech, protect data as a sacred trust, and offer technology for collective benefit rather than shareholder profit. The project is currently in its capital-raising phase and is calling for investors to seize the opportunity to join. The roadmap calls for immediate action to develop prototypes and scale to a global presence (Star Nations, n.d.). The development of AI is causing environmental harm to Indigenous peoples, violations of data sovereignty, and the reinforcement of harmful stereotypes and cultural erasure. Initiatives like Star Nations prove that despite all this, Indigenous leaders still hold the innovation and knowledge to protect our planet, as they always have. Star Nations is the first Indigenous-led orbital AI center to propose data sovereignty and Indigenous ownership. As stated in their ethos, "AI is for space because water is for life, not machines." If we listen to the original stewards of the land that AI feeds off of, we still have a chance to save the water for our grandchildren.
By: Marvin Nguyen
After taking time to read and learn from the "AI & Indigenous Peoples" source on the Earth Daughters website, I understand that artificial intelligence is clearly continuing to develop rapidly in today's world. Honestly, I'd say that AI is not simply a basic technical tool. It feels concerning that if artificial intelligence develops without the consent of Indigenous communities, it will lead to intangible consequences for the rights, sovereignty, and fundamental core values of those communities. Next, I want to talk about the risks of AI to Indigenous people, specifically when data is collected and used without proper consent. Examples include the widespread exploitation of Indigenous knowledge, along with cultural and environmental information from Indigenous lands, as readily available data for AI. As a result, Indigenous peoples clearly lose fundamental control over the sharing and interpretation of their own knowledge. I think the advancement of artificial intelligence contributes to the marginalization of Indigenous voices, and perhaps the root cause is not the artificial intelligence itself, but rather the way it is developed within the framework of human technology. I also want to address the complicated issues of sovereignty and the rights of Indigenous peoples. We all know that sovereignty is not simply about inanimate land; it also symbolizes the rights of a people and how they exercise those rights on that land. Therefore, the fact that artificial intelligence exploits data from these lands, which it considers inanimate, without the voluntary consent of the Indigenous people is deeply concerning. These knowledge models are then manipulated and disseminated indiscriminately, a situation that communities have faced for generations. Where are their rights in all of this?
It is seriously worrying when the rights of a knowledge-based civilization are disregarded. Indigenous communities are clearly not simply a readily available source of data; they have the right to decide whether their knowledge should be used in the ever-evolving landscape of artificial intelligence technology. The Earth Daughters sources I focused on in this discussion emphasize that artificial intelligence still has avenues to develop in a more appropriate direction. By incorporating the expertise of Indigenous people in data collection, such as strengthening collaborative approaches to obtain guidance on core values and leadership from Indigenous communities, AI can develop beneficial applications: preserving cultural knowledge, protecting the environment and Indigenous rights, and enhancing community resilience. I also want to emphasize that Indigenous people should be supported and empowered to be more proactive in protecting their rights, understanding who is leading the development of technology, and recognizing that artificial intelligence exists to be embraced, actively managed, and utilized by them. Artificial intelligence can clearly follow a more appropriate path, harmonizing its development with the fundamental rights of Indigenous people, which is the right approach. To conclude this reflective essay, I realize that artificial intelligence requires high ethical standards. Addressing and respecting the legitimate rights of Indigenous people is crucial, and I am concerned that if AI continues to develop without meaningful consent, it will further obscure Indigenous rights. Future artificial intelligence must be shaped in accordance with the core values of Indigenous people and must completely abandon harmful and imposing practices.
By: Kowsar Hakar
Artificial intelligence (AI) is rapidly reshaping many aspects of society, yet its effects on Indigenous peoples are often overlooked. AI systems can reflect biases embedded in the data they are trained on, which frequently excludes or misrepresents Indigenous voices and knowledge. Without careful attention, this can reinforce harmful stereotypes and widen digital divides, further marginalizing Indigenous communities (United Nations). At the same time, when AI is developed inclusively and ethically, it holds real promise for supporting Indigenous cultures and languages. For example, AI-driven tools can help document and revitalize endangered Indigenous languages that otherwise risk disappearing entirely. These tools can analyze voices, create searchable digital archives, and support educational resources that help younger generations learn and keep traditional languages alive (United Nations). Yet the use of AI in Indigenous cultural contexts also raises serious concerns about cultural appropriation and representation. Recent AI-generated social media personas have imitated Indigenous identities without community involvement or consent, sparking criticism for cultural misuse and “digital blackface” (Live Science). Such examples reveal how AI can misrepresent Indigenous cultures if not guided by Indigenous leadership. To ensure AI supports Indigenous rights, it must be shaped by principles of data sovereignty, cultural respect, and community governance. Reports from UNESCO emphasize guidelines for Indigenous data control so that communities can decide how their cultural and linguistic information is used in AI systems. Such frameworks aim to prevent exploitation or misappropriation and promote ethical collaboration (UNESCO). When Indigenous communities are actively involved in AI development, technology becomes a tool for empowerment rather than a vehicle of colonial patterns of exclusion. 
Ultimately, AI designed with Indigenous values and leadership can contribute to cultural resilience and help shape futures in ways that honor Indigenous knowledge and self-determination.
By: Dinh Thai Bao Tran
Artificial intelligence (AI) is changing the way people live, work, and connect with each other. From translation apps to environmental monitoring, AI has become a big part of global development. However, for Indigenous communities, this new technology brings both dangers and possibilities. AI can help preserve languages and support community empowerment, but it can also repeat old injustices, take away cultural knowledge, and threaten sovereignty. Understanding both sides of AI is important if we want a digital future that is fair and respectful of Indigenous knowledge. AI systems are trained on large amounts of data that mostly reflect Western values and worldviews. Because of that, many AI programs have built-in bias that can misrepresent or harm Indigenous people. For example, facial recognition tools make more mistakes identifying people of color, including Indigenous people, than white users (Buolamwini & Gebru, 2018). These mistakes are not just "technical errors"; they show how power is uneven in data collection and design. AI can also lead to cultural appropriation and data misuse. Many Indigenous songs, images, and languages are taken from the internet to train AI tools, often without permission or credit. This is a kind of digital colonialism: what is being taken is not land but knowledge and culture. As the Indigenous AI Working Group (2020) warns, when AI systems are trained on Indigenous data without community control, it continues "a form of knowledge violence." Another serious concern is surveillance. In the name of "security" or "environmental protection," governments and corporations have used AI to watch Indigenous activists. During pipeline protests in North America, police used drones and data analytics to track Indigenous demonstrators. These actions damage trust and repeat colonial control, only now through technology.
Even with these risks, many Indigenous communities are using AI in creative and positive ways to protect their cultures and rights. A clear example is language revitalization: around the world, AI programs are helping record, translate, and teach endangered Indigenous languages. AI can also help monitor and protect the environment, which is deeply connected to Indigenous sovereignty. In the Amazon, the Waorani community has used AI-based mapping tools to find illegal logging, combining traditional knowledge with modern science. These projects show that technology does not have to work against Indigenous culture; it can strengthen Indigenous peoples' role in protecting the Earth. AI can also help reduce the digital divide, bringing education, healthcare, and information that fit each community's culture. For example, AI translation tools can help Indigenous students learn in their own languages, and medical AI systems can adapt to their specific health needs. When technology is guided by ethics and cooperation, AI can become a tool of empowerment, not oppression. At the heart of this issue is the idea of Indigenous Data Sovereignty: the right of Indigenous Peoples to control how their data is collected, owned, and used. Respecting this right is essential.
As Indigenous technologist Dr. Jason Edward Lewis said, "Some knowledge is sacred — not everything that can be coded should be coded" (Lewis, 2020). True innovation only happens when humans build technology with humility, gratitude, and respect for its limits. AI itself is neither good nor bad; it reflects the values and goals of the people who create it. For Indigenous communities, the challenge is not only to resist AI but to re-imagine it in fairer, more inclusive ways. By claiming control over their data, setting cultural protocols, and designing systems that reflect Indigenous worldviews, communities can make AI a tool for justice rather than exploitation. The future of AI and Indigenous rights depends on relationships: between people, between knowledge systems, and between technology and the Earth. AI can be used to exploit, but it can also be used to protect and connect. If guided by respect, collaboration, and environmental awareness, AI can heal instead of harm. The question is not how AI will shape Indigenous futures, but how Indigenous values can shape the future of AI.
By: Owen Jenkins. Owen's personal beliefs about human rights extend to an evolving understanding of Indigenous rights, the past and present ways in which our nation has failed Indigenous communities, and the ways in which we can pave a way forward that is equitable and just.
With the rise of Artificial Intelligence (AI), there has been increasing concern about its integrity and reliability, and about the accuracy of the information that emerges from the pool of data from which AI generates its answers. When someone submits a prompt, the system draws on a vast amount of varied and sometimes conflicting inputs from across the internet, including corners of the web filled with racist and sexist articles and ideologies. When users get their information from AI-generated responses, they are fed what they perceive to be convincing evidence to support their claims. Individuals who post their opinions, whether factual or not, on online platforms influence AI's understanding and resulting output. This leaves open the potential for misinformation to spread like wildfire across the web, shaping AI's understanding of the world we live in today. While the terms of service for many social media platforms allow posts to be flagged for misinformation, a vast amount of shared information will go undetected. AI technology can therefore be used to harm Indigenous culture: for many decades, Indigenous people have been poorly represented in movies and media, to name a few, and this failure to accurately portray Indigenous culture and history affects how AI perceives Native people and culture, making it easier to pass misinformation on to the public through AI inquiry. In an AI generation study, Sean Dudley explored how AI imagines a Native American town. When asked to generate a Southwest American town, it showed a modern-looking town 70 miles from Tuba City, the largest community in the Navajo Nation, with modern cars and neatly kept roadways.
However, when the same prompt named Tuba City, the generated image looked dilapidated, with junky old cars, dirt roads, and sheds for houses, compared to the more modern and thriving look of the first prompt. This study shows how severely biased opinions about Native culture can sneak into AI outputs. In reality, Tuba City is a mixture of beautiful landscapes, historic structures, and small-town houses, nothing like the sad image that AI generated for this study. This kind of misinformation can alter the way we come to understand an entire group of people and their culture (Dudley et al., 2024). AI, however, is not always a bad tool. In other ways, AI supports the spread of information that can enhance our understanding of Indigenous culture. For example, it can be used to keep Indigenous languages alive, which would mean a lot for the revival of languages that have become endangered. Dr. Jared Coleman, a translation tool developer at Loyola Marymount University, says, "For endangered, no-resource languages, creating translators is challenging, and accuracy is even more critical … for this reason, our goal isn't to produce perfect translations but to generate accurate ones that closely capture the user's intended meaning." The other side of that coin, however, is closely related to the earlier concern: AI may not accurately present the language and its interpretations if misinformation is spread consistently across the internet (Jiang, 2025). The world is changing quickly, and with it our use of technology. As with our past, we are accountable for how we remember and understand each other in this world. This responsibility rests on the shoulders of each individual, and therefore we must be cognizant of the ways we rely on and trust the technologies that shape our understanding of the world.
By: Asli Gutierrez
Indigenous languages are central to the identity of Indigenous people, the preservation of their cultures, worldviews, and visions, and an expression of self-determination. But many of these languages are at risk of disappearing: by 2050, only around 20 Indigenous languages may remain in the United States. This raises an important question: could Artificial Intelligence (AI) help keep Indigenous languages from extinction? While AI offers hope for saving these languages, it also poses risks for Indigenous communities, especially their languages. This article will highlight both the opportunities and the risks of AI in preserving Indigenous languages. While AI has great potential to preserve Indigenous languages, there are significant risks for Indigenous communities. First, it could harm Indigenous communities by misunderstanding languages and undermining cultural knowledge, because AI is trained on incomplete or inaccurate datasets. As a result, AI can produce translations or interpretations that are wrong and misleading. This contributes to spreading misinformation and undermines efforts to preserve and respect Indigenous knowledge and linguistic diversity. As Danielle Boyer explains on the podcast Can AI save endangered Indigenous languages?, "Even if it's 100% correct, I still worry that it might lack important context." Second, AI cannot replace humans' ability to communicate. Indigenous languages are living traditions, meant to be spoken and shared. Learning from elders and other community members is essential to keeping the languages alive and meaningful. Without human connection, AI might turn living languages into something artificial: words without the context or culture that give them life. Despite the risks, AI can be a powerful ally for Indigenous communities in keeping their languages alive.
First, AI can be used to record and teach languages. Walking in Two Worlds explains how First Nations computer programmers are developing AI technology explicitly to assist in preserving endangered languages, keeping them recorded and accessible for future generations. Similarly, Danielle Boyer's Scobot robot engages with children in Nishinaabe, speaking at their level. By making language learning interactive, AI can help preserve these living traditions. Second, AI can enable community-led and ethical language conservation strategies. UNESCO emphasizes Indigenous agency over AI technologies and data to make sure communities have power over how their languages are used and propagated. This prevents exploitation, respects cultural knowledge, and ensures AI will strengthen rather than harm Indigenous sovereignty. Finally, AI can be employed to reach younger generations and make language learning enjoyable. Peter-Lucas Jones notes that AI can help engage youth in bonding with their heritage while learning, exposing more people to language learning and making it enjoyable. Through tools like the Scobot, children not only learn their language but also interact with technology that represents their culture, customs, and community identity. By embracing innovation responsibly, AI can be a great ally in maintaining Indigenous languages. In conclusion, AI is a tool for Indigenous languages, with its risks. It can strip away the human connection that makes Indigenous languages feel alive, yet when used correctly and responsibly it can help preserve languages and support communities in keeping their culture strong. The key to making it work is to ensure Indigenous communities are the ones guiding how AI is used. Projects like Danielle Boyer's Scobot show that, with care and community involvement, AI can help keep languages alive and meaningful for the next generations.
Risks of AI for Indigenous Peoples (bias, surveillance, cultural appropriation, data exploitation)
11/2/2025
By: Arbay Abdulahi
I am a contributor to the Earth Daughters campaign, and I believe that artificial intelligence that is not overseen cannot be a futuristic tool for progress. Instead, it is a threat of digital colonization that could erase the identity and knowledge of Indigenous peoples. Right now, AI is everywhere, and it has many negative effects: it can spread misinformation even as people use it for data entry and to manage systems. For Indigenous people, AI brings a threat because it is acting like a new colonial power, taking what is not its own and turning it into something different. This is not only a technical issue; it is also a fight for human rights and environmental justice. I would say that one of the biggest problems with AI is that it gets all its information from data collection. Artificial intelligence leaves out and wrongfully judges Indigenous communities; it is biased, and when it is used by governments or agencies it creates real harm. For example, in the article "Artificial intelligence and Indigenous peoples' realities," Nina Sangma writes, "For indigenous peoples the dawning age of Artificial intelligence threatens to exacerbate existing inequalities and violence." This shows that artificial intelligence doesn't just create new problems; it makes the problems that already exist even worse, because it works very fast and on a large scale. The systems that control artificial intelligence are risky for Indigenous people due to high-risk surveillance. For example, facial recognition is trained on data that poorly represents Indigenous people, so they are wrongfully identified, falsely accused, and unfairly targeted. These AI tools make existing discrimination much worse by giving authorities new ways to watch and control marginalized communities.
In the article "Biased Technology: The Automated Discrimination of Facial Recognition," Rachel Fergus states that "Law enforcement and the criminal justice system already disproportionately target and incarcerate people of color. Using technology that has documented problems with correctly identifying people of color is dangerous." This shows that Indigenous people face a double burden of risk, and that adding an AI technology like facial recognition that is not always correct is dangerous. In conclusion, biased AI and the digital theft of culture are major new dangers to Indigenous peoples. AI has been actively hurting Indigenous rights and damaging their unique cultural integrity by using their information without permission. For the Earth Daughters campaign, defending the land also means fighting for fairness in technology. We need to stand up for Indigenous sovereignty and make sure that AI does not digitally take over everything, and to do that, I believe we must all work together to figure out how to take control and manage this situation.
By: Ali Abdiaziz
Artificial Intelligence (AI) is changing how we live—from communication and education to environmental protection. But for Indigenous Peoples, AI is a double-edged sword. It can be used to continue harmful colonial practices, or it can support language, land, and cultural preservation. The impact depends on who is in control, and whether Indigenous rights are respected.
The Risk: Digital Colonialism and Bias
One major issue is data colonialism. Many AI systems are trained using data taken from the internet, archives, and public sources—often including Indigenous stories, images, languages, and land information—without consent. This is a digital form of exploitation, where outsiders use Indigenous knowledge for research or profit, while communities get little say or benefit. Facial recognition is another example. A 2018 study by Buolamwini and Gebru found that these systems often misidentify people of color. For Indigenous people, who already face over-policing in countries like Canada and Australia, this technology can increase the risk of surveillance and criminalization. AI is also used in environmental mapping and planning, often by companies looking to extract resources. When these tools ignore Indigenous land rights or traditional knowledge, the result can be displacement and environmental damage (Mohawk & Smith, 2021). These technologies, though advanced, still carry the same old patterns of exclusion.
The Opportunity: Language and Land Protection
Despite these risks, Indigenous communities are using AI in powerful ways. Language revitalization is one key area. Many Indigenous languages are endangered, but AI can help document and teach them. For example, the FirstVoices project in British Columbia uses digital tools to preserve and share Indigenous languages online. These efforts are led by communities and help keep cultural identity strong for future generations (First Peoples' Cultural Council, 2022). AI is also helping protect land.
In the Amazon, Indigenous groups are using satellite images and machine learning to monitor illegal logging in real time (BenYishay et al., 2021). This combination of tech and traditional land knowledge shows how AI can support environmental justice when used ethically.
The Solution: Indigenous Data Sovereignty
To make AI work for Indigenous Peoples, their data rights must be respected. This is known as Indigenous data sovereignty—the right of Indigenous nations to control their data. The CARE Principles were created to support this: Collective Benefit, Authority to Control, Responsibility, and Ethics (Carroll et al., 2020). These principles remind researchers and developers to prioritize Indigenous leadership, consent, and values when working with data or technology.
A Shared Future
AI is not neutral. It reflects the priorities of the people who create it. If Indigenous Peoples are included—and respected—AI can become a tool for healing and empowerment. The future of technology must be built with Indigenous voices at the center, not the margins.
Bio: Ali Abdiaziz is a student passionate about art, technology, and Indigenous rights.
By Sequoia Wells
Sequoia Wells is a writer and student researcher focused on the intersections of technology, environmental justice, and Indigenous rights. Artificial Intelligence (AI) is rapidly shaping the world we live in—transforming communication, education, and environmental monitoring. For Indigenous communities, however, the growth of AI presents a double-edged sword. While it holds potential for language revitalization and cultural preservation, AI also poses serious threats to Indigenous sovereignty, data rights, and environmental justice. Historically, Indigenous communities have been excluded from conversations about the technologies that impact them. Today, AI systems are being trained on unregulated and culturally insensitive data—extracted without consent or context. This mirrors colonial practices of knowledge exploitation, where sacred traditions and languages are taken, commercialized, and used without returning benefits to the communities they originate from. One alarming example involves the Lakota Language Consortium, a nonprofit that approached the Rosebud Sioux Tribe with promises of collaboration. However, after gathering recordings and resources, the organization copyrighted the materials and attempted to sell them back to the community—undermining trust and violating the principle of Indigenous data sovereignty (The Take, 2025). Indigenous educator and robotics designer Danielle Boyer highlights how governments and corporations want access to Indigenous knowledge—but not to connect it to Indigenous people themselves. Her work creates culturally responsive robotics programs for Indigenous youth, designed with intention, accuracy, and community permission. Unlike commercial Large Language Models (LLMs) like ChatGPT, her systems are built to connect, not extract. "What's the point of language if not to connect people?" she asks.
AI trained on scraped internet data cannot replicate the cultural nuance, context, and meaning embedded in Indigenous languages and traditions (Boyer, 2025). Beyond cultural erasure, AI's environmental footprint also harms Indigenous land. Training large-scale AI models consumes vast amounts of energy, minerals, and water—often sourced from Indigenous territories without consent. According to the University of Bonn's Institute for Science and Ethics, we must differentiate between "AI for sustainability" and the "sustainability of AI." While AI can support climate solutions, it must not do so at the cost of the communities already leading environmental stewardship (van Wynsberghe, 2024). AI also risks reinforcing stereotypes. A 2025 article by the Navajo Times revealed that image generators produced harmful, inaccurate depictions of Navajo people—such as "mystical smoke" prayer circles and struggling students—highlighting the unchecked bias embedded in AI training data. As Navajo-Hopi filmmaker and scholar Angelo Baca emphasizes, Indigenous people must be able to protect their stories, images, and voices. The Indian Arts and Crafts Act of 1990 was designed to prevent cultural misappropriation in commerce, but it does not yet extend to the AI space (Baca, 2025). So where do we go from here? The path forward requires AI systems that respect Indigenous rights, governance, and cultural integrity. That means enforcing meaningful consent, honoring data sovereignty, and investing in community-led innovation. Indigenous people must be active participants—not passive data sources—in designing the future of technology. AI is not inherently bad. But when developed without ethical guidance, it becomes a modern tool for age-old systems of extraction and exclusion. We must ask ourselves not just what AI can do, but who it should serve—and on whose terms.
By: Ismahan Salat
I'm Ismahan Salat from Burien, WA. I care about environmental justice and supporting Indigenous communities in protecting their culture, land, and rights. AI is everywhere. It's supposed to make life easier, but for Indigenous communities it's tricky. It can either help protect culture and land or mess things up even more. The difference comes down to who's in control and whose voices actually matter. AI can seriously mess things up if it's biased or used without permission. Most AI is trained on data that ignores Indigenous languages, stories, and histories. That means it can erase perspectives, spread stereotypes, or just get things straight-up wrong. Facial recognition technology, for example, misidentifies people of color way more than white people, which could put Indigenous activists at risk of surveillance (Buolamwini & Gebru, 2018). Cultural appropriation is another problem. AI scrapes songs, art, and stories online without asking. That's knowledge stolen out of context, and it ignores Indigenous rights. The Indigenous AI position paper calls this "digital colonialism" because it's basically repeating old patterns of taking what isn't yours (Lewis et al., 2020). But AI isn't all bad. If done right, it can actually help. One of the coolest things is saving Indigenous languages. Many are disappearing, but AI tools can help preserve them. Canada's FirstVoices platform uses AI to make dictionaries, learning tools, and archives, all controlled by Indigenous language experts (First Peoples' Cultural Council, 2023). AI can also help protect the environment. Indigenous nations are often fighting to defend land from climate change and resource extraction. AI can track deforestation, water pollution, and animal migration, giving communities proof to protect their territories and traditions. The real key is who controls the data. Indigenous data sovereignty means communities decide how their information is used. That way AI works for them, not against them.
The Global Indigenous Data Alliance created the CARE Principles (Collective Benefit, Authority to Control, Responsibility, and Ethics) to guide AI in a way that actually respects communities (GIDA, 2019). AI isn't neutral. It reflects the people who make it. For Indigenous communities it can either repeat harm or create new opportunities to protect culture, land, and language. The future depends on centering Indigenous voices, giving communities control, and building AI that actually respects their worldviews. If we do it right, AI can help Indigenous Peoples survive and thrive instead of being another way to erase them.