By Sequoia Wells
Sequoia Wells is a writer and student researcher focused on the intersections of technology, environmental justice, and Indigenous rights.

Artificial Intelligence (AI) is rapidly shaping the world we live in, transforming communication, education, and environmental monitoring. For Indigenous communities, however, the rise of AI is a double-edged sword. While it holds potential for language revitalization and cultural preservation, AI also poses serious threats to Indigenous sovereignty, data rights, and environmental justice.

Historically, Indigenous communities have been excluded from conversations about the technologies that affect them. Today, AI systems are being trained on unregulated, culturally insensitive data extracted without consent or context. This mirrors colonial practices of knowledge exploitation, in which sacred traditions and languages are taken, commercialized, and used without returning benefits to the communities they come from.

One alarming example involves the Lakota Language Consortium, a nonprofit that approached the Rosebud Sioux Tribe with promises of collaboration. After gathering recordings and resources, however, the organization copyrighted the materials and attempted to sell them back to the community, undermining trust and violating the principle of Indigenous data sovereignty (The Take, 2025).

Indigenous educator and robotics designer Danielle Boyer highlights how governments and corporations want access to Indigenous knowledge but not to connect it to Indigenous people themselves. Her work creates culturally responsive robotics programs for Indigenous youth, designed with intention, accuracy, and community permission. Unlike commercial large language models (LLMs) such as ChatGPT, her systems are built to connect, not extract. “What’s the point of language if not to connect people?” she asks. AI trained on scraped internet data cannot replicate the cultural nuance, context, and meaning embedded in Indigenous languages and traditions (Boyer, 2025).

Beyond cultural erasure, AI’s environmental footprint also harms Indigenous land. Training large-scale AI models consumes vast amounts of energy, minerals, and water, often sourced from Indigenous territories without consent. According to the University of Bonn’s Institute for Science and Ethics, we must differentiate between “AI for sustainability” and the “sustainability of AI.” While AI can support climate solutions, it must not do so at the cost of the communities already leading environmental stewardship (van Wynsberghe, 2024).

AI also risks reinforcing stereotypes. A 2025 article in the Navajo Times revealed that image generators produced harmful, inaccurate depictions of Navajo people, such as “mystical smoke” prayer circles and struggling students, highlighting the unchecked bias embedded in AI training data. As Navajo-Hopi filmmaker and scholar Angelo Baca emphasizes, Indigenous people must be able to protect their stories, images, and voices. The Indian Arts and Crafts Act of 1990 was designed to prevent cultural misappropriation in commerce, but it does not yet extend to the AI space (Baca, 2025).

So where do we go from here? The path forward requires AI systems that respect Indigenous rights, governance, and cultural integrity. That means enforcing meaningful consent, honoring data sovereignty, and investing in community-led innovation. Indigenous people must be active participants, not passive data sources, in designing the future of technology.

AI is not inherently bad.
But when it is developed without ethical guidance, it becomes a modern tool for age-old systems of extraction and exclusion. We must ask ourselves not just what AI can do, but who it should serve, and on whose terms.

References (APA 7th edition)