Saturday, February 28, 2026

Film Screening - Humans in the Loop

 


Hello, learners! I'm a student, and I'm writing this blog as part of a film screening activity based on the film Humans in the Loop. The task was assigned by Dilip Barad sir.




Pre-Viewing Task


1. AI Bias and Indigenous Knowledge Systems

AI bias refers to systematic distortions in machine learning systems that arise because the data used to train them reflects existing social inequalities, cultural prejudices, and dominant worldviews. Since AI models depend on human-created and human-labelled datasets, they inevitably absorb the assumptions, priorities, and perspectives of the people and institutions that design and supervise them. As a result, these systems often privilege dominant cultural, racial, and economic frameworks while marginalising alternative knowledge traditions.
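To make this mechanism concrete, here is a minimal, hypothetical sketch (Python with scikit-learn; the plant features and the "anything wild is a weed" labelling rule are invented for illustration, not taken from the film or any real system) of how a classifier trained on human-labelled data simply reproduces the guideline it was given:

```python
# Hypothetical sketch: a model trained on human-supplied labels can only
# reproduce the worldview encoded in those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented "plant" features: column 0 = grows wild (1) or cultivated (0),
# column 1 = has a known medicinal use (1) or not (0).
X = rng.integers(0, 2, size=(200, 2))

# An industrial labelling guideline: anything that grows wild is a "weed" (1).
# Medicinal value is never consulted, so the model cannot learn that it matters.
y = X[:, 0]

model = LogisticRegression().fit(X, y)

# A wild plant with medicinal use is still confidently classified as a weed.
print(model.predict([[1, 1]]))  # -> [1]
```

Because the training labels never consult medicinal value, no amount of statistical sophistication can recover it: the bias lives in the labels, not in the learning algorithm.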

The film Humans in the Loop powerfully illustrates this issue through the character of Nehma, an Oraon Adivasi woman employed to label ecological images for an AI system. She is instructed to classify plants, insects, and animals using industrial and commercial categories such as “pest,” “weed,” or “crop.” However, these labels clash with her Indigenous knowledge system, which understands these life forms not as isolated economic units but as interconnected beings within a relational ecosystem.

The film highlights that terms like “pest” and “weed” are not neutral scientific truths; they are functional labels shaped by economic priorities. A plant considered a weed in industrial agriculture may hold medicinal, nutritional, or ecological value within an Indigenous context. This raises an important question: will an industrial, consumption-driven economy ultimately control how knowledge itself is defined and structured?

Indigenous Ecological Knowledge (IEK) is holistic, community-based, and grounded in lived experience. It resists the rigid, binary classifications that machine learning systems require in order to function. In doing so, the film reveals how the very structure of AI tends to favour extractive and capitalist epistemologies over relational and sustainable ones.

A striking moment in the film occurs when an Adivasi child asks an image generator to show him riding a crocodile, but the system instead produces an image of a white boy riding an alligator. This scene symbolically demonstrates how AI encodes certain identities as the “default,” while others remain invisible. It forces viewers to confront a crucial question: who determines the “ground truth” within AI systems, and whose knowledge and realities are excluded in the process?



2. Labour and Digital Economies

Invisible labour in digital economies refers to the hidden, underpaid, and often gendered human work that sustains technologies presented as automated or “intelligent.” Tasks such as data annotation, content moderation, and image classification are performed by real people, even though these processes are marketed as products of seamless artificial intelligence.

The phrase “artificial intelligence” itself can be misleading because it conceals the massive human workforce that enables these systems to function. In India alone, more than 70,000 workers, many of them rural women, contribute to training AI by carrying out repetitive cognitive tasks. Their efforts form the backbone of global AI development, yet their presence remains largely unacknowledged.

This labour is invisible in several interconnected ways. It is geographically distant from the corporations that profit from it, socially marginalised due to class, caste, and gender hierarchies, and algorithmically erased from the final AI product. Public narratives about technological advancement rarely acknowledge these workers, reinforcing the illusion that AI operates independently of human intervention.

The film Humans in the Loop carefully depicts the lived realities of this work environment: rooms lit by harsh fluorescent lights, slow computers, supervisors pressuring employees to meet strict targets, and workers attempting to interpret labels that reflect unfamiliar cultural or industrial vocabularies. These details draw attention to the physical and emotional texture of digital labour.

By foregrounding this hidden workforce, the film challenges the myth that AI is neutral, objective, or detached from social structures. Instead, it exposes global supply chains that echo older forms of colonial and caste-based extraction, where marginalised communities provide raw material, now in the form of cognitive labour, for powerful institutions.

The issue also raises important ethical questions about fair wages, intellectual contribution, and recognition. Who benefits financially from AI systems, and who remains unseen despite shaping their outcomes? When cognitive labour is commodified without proper acknowledgment, technological progress becomes deeply unequal.

Ultimately, the film insists that the story of AI must include the workers who power it. Their labour should not remain on the margins of technological narratives but must be placed at the centre of how we understand digital modernity.


3. Politics of Representation

In Humans in the Loop, representation functions at two closely connected levels. First, the film examines how AI systems represent or fail to represent Adivasi communities. Second, it reflects on how the film itself portrays both technology and Adivasi culture to a wider, mainstream audience. In this way, representation becomes both the subject of critique and the medium through which that critique is delivered.

Many critics and promotional discussions highlight the film’s distinctive perspective: instead of celebrating technological progress, it questions how such progress can deepen exclusion and marginalise Indigenous knowledge systems. Importantly, Adivasi experience is not treated as background scenery. Rather, it becomes the interpretive lens through which artificial intelligence is examined and challenged.

One of the most discussed scenes involves an AI image generator producing a stereotypical, Europeanised image when prompted to depict a tribal woman. This moment directly exposes the politics embedded within training data. The system does not truly “see” Adivasi identities; instead, it reproduces colonial-era visual assumptions. The distortion is not accidental; it reflects whose images and histories dominate the data used to train such systems.

Executive producer Kiran Rao has framed the film’s intervention around ideas such as equitability, representation, and “data colonialism.” This framing makes clear that representation here is not simply about visual accuracy or aesthetic choice. It is a structural and political issue tied to power, access, and control over knowledge production.

At the same time, responses to the film’s depiction of Adivasi culture are not unanimous. Some reviewers appreciate its grounded and research-based portrayal, praising its sensitivity and authenticity. Others, including at least one critic on Letterboxd, argue that the film risks “fetishising” Adivasi life for liberal, festival-oriented audiences without fully committing to deeper political change. These differing responses show that representation itself remains a contested terrain.

The film’s bilingual use of Hindi and Kurukh is also significant. By including Kurukh, an Adivasi language rarely heard in mainstream Indian cinema, the film performs an act of representational recognition. Language becomes a means of asserting presence and resisting erasure.

Taken together, the film urges viewers to recognise a double danger: AI systems can misrepresent or erase Adivasi communities, and cinema, if not self-critical, can reproduce similar distortions. The responsibility of representation, therefore, lies not only with technology but also with storytelling itself.


While-Watching Task


1. Narrative & Storytelling

In Humans in the Loop, Nehma’s personal life is carefully woven into the larger algorithmic systems she works within. The film does not treat AI as something distant or abstract; instead, it shows how global technological infrastructures enter directly into domestic and intimate spaces. Nehma’s job as a data annotator becomes part of her daily routine, performed alongside household chores, caregiving, and family conversations. By placing digital labour within the home, the narrative reveals how international AI networks depend on local, often economically vulnerable workers whose contributions remain largely invisible.

The film uses specific narrative turns to foreground labour, family, and knowledge systems. One important strand is the portrayal of Nehma balancing annotation work with domestic responsibilities. These scenes underline the gendered nature of digital labour: the boundaries between paid work and unpaid care work collapse, showing how women’s labour is layered and continuous. The fluorescent-lit workspace and the home environment merge symbolically, suggesting that algorithmic systems quietly reorganise everyday life.

Another significant narrative turn occurs when Nehma is required to categorise plants, insects, and animals using industrial labels such as “pest” or “weed.” These moments bring her Oraon Indigenous knowledge system into direct tension with algorithmic logic. What the AI framework defines as a “weed” may hold medicinal, ecological, or cultural significance in her lived experience. Through this conflict, the film demonstrates that annotation is not a neutral technical activity; it demands interpretation and forces workers to translate their own knowledge into categories shaped by corporate and industrial priorities.

By linking these strands (domestic life, economic necessity, and epistemological conflict), the film positions Nehma not merely as an employee within a data pipeline, but as a subject negotiating between two worlds. Her story becomes a lens through which viewers understand how algorithmic systems restructure labour, reshape family dynamics, and challenge Indigenous ways of knowing.

Another significant narrative development in Humans in the Loop unfolds through Nehma’s interactions with her family and community. These conversations situate her labour within a shared social world rather than framing it as merely individual employment. Her work decisions, frustrations, and ethical dilemmas are shown to affect and be shaped by collective life. In doing so, the film resists the neoliberal idea of the isolated worker and instead presents labour as embedded within kinship, responsibility, and community relationships.

The film also relies on striking visual contrasts to deepen this narrative tension. Shots of forests, rivers, and village landscapes are juxtaposed with the flat, standardised digital interfaces of annotation software. This contrast visually dramatises the distance between lived ecological knowledge (rich, sensory, relational) and the rigid, machine-readable categories demanded by algorithmic systems. Nature appears dynamic and interconnected, while the digital screen reduces it to selectable labels and binary choices.

Through these narrative and visual strategies, the film makes clear that algorithmic systems do not remain confined to corporate offices or distant servers. They enter homes, shape conversations, and subtly reorganise cultural frameworks. Digital labour is thus revealed not as a detached technological task, but as something deeply interwoven with family life, cultural identity, and Indigenous knowledge systems.


In Humans in the Loop, when Nehma “teaches” AI through data annotation, the film reframes machine learning as something fundamentally human-driven rather than autonomous. Her work makes visible the hidden cycle behind the phrase “machine learning”: AI systems learn only because humans repeatedly supply judgement, correction, and categorisation. What appears as artificial intelligence is, in reality, accumulated human interpretation.

Beyond technical terminology, Nehma’s role shows that this teaching is not mechanical but interpretative and ethical. Every label she selects involves choices about context, meaning, and relevance. The machine does not encounter the world directly; it encounters a version of the world filtered through her decisions. In this sense, the human–machine loop becomes a process of mediated understanding, where human cognition quietly structures what the AI comes to “know.”

Importantly, what Nehma teaches is shaped by her own cultural and ecological worldview even when she is compelled to translate it into industrial categories. This suggests that human–machine learning loops are also epistemological loops. Certain knowledge systems are reformatted into machine-readable data, while others are simplified, distorted, or excluded entirely. AI does not simply learn information; it inherits structured perspectives, including biases and limitations embedded in human judgement.

Ultimately, the film encourages viewers to understand the human–machine loop as a social and cultural process rather than a purely technical mechanism. If AI learns from humans, then questions of power, recognition, and representation become central: whose knowledge is amplified through these systems, and whose ways of knowing are left outside the loop?


2. Representation & Cultural Context

In Humans in the Loop, Adivasi culture, language, tradition, and ecological knowledge are portrayed with intimacy and restraint rather than through exotic or romanticised imagery. The film embeds these elements within Nehma’s everyday life, allowing viewers to encounter Adivasi identity as a lived, contemporary reality. Her position as an Oraon Adivasi woman is not treated as symbolic decoration but as central to how she understands work, technology, and the natural world.

Cultural traditions appear organically through domestic spaces, family conversations, and community interactions. Rituals, food practices, and shared customs are presented subtly, without dramatic emphasis. This understated approach avoids spectacle and instead highlights continuity how ancestral ways of living coexist alongside modern digital labour. The film suggests that tradition and technological modernity are not mutually exclusive, but exist in tension and negotiation.

Language becomes a key marker of identity and power. Nehma’s use of her Indigenous language in personal and community settings contrasts sharply with the English-dominated interfaces of AI annotation platforms. This linguistic contrast reflects a broader hierarchy within global technological systems, where English operates as the default medium of authority. While Adivasi language carries cultural memory, worldview, and belonging, it remains marginal within the structures of digital capitalism.

Ecological knowledge is perhaps the film’s most profound representation. Nehma’s understanding of plants, insects, and animals is relational and experiential, formed through lived interaction with her environment. The film juxtaposes this holistic worldview with the reductive categories demanded by machine learning systems, terms like “weed” or “pest.” In doing so, it reveals the limits of algorithmic classification and demonstrates how Indigenous ecological knowledge challenges industrial definitions of value and utility.

Overall, the film presents Adivasi identity as resilient, adaptive, and intellectually rich. Cultural memory, language, and ecological wisdom persist even within globalised digital economies. At the same time, the narrative raises critical concerns about recognition and visibility: although Adivasi knowledge contributes to both community survival and technological development, it remains marginalised in dominant narratives of innovation and progress.


Does the film challenge or reinforce dominant media stereotypes about tribal communities and modern technology?

In Humans in the Loop, the depiction of Nehma and her community primarily challenges dominant media stereotypes about tribal (Adivasi) communities and their relationship with modern technology.

Mainstream representations often portray tribal communities in two limiting ways: either as “primitive” and disconnected from technological modernity, or as romanticised figures living in harmony with nature but outside contemporary progress. The film disrupts this binary. Nehma is shown as deeply rooted in her Adivasi cultural identity while simultaneously participating in global digital economies as a data annotator. Her work demonstrates that tribal communities are not technologically absent; rather, they are actively involved in building AI systems, though their contributions frequently remain invisible.

The film also resists stereotypes by emphasising intellectual agency. Nehma is not framed as a passive beneficiary of development or as someone merely adapting to change. Instead, she engages critically with her tasks, interprets complex categories, and makes decisions that shape how AI systems “learn.” By highlighting her cognitive and interpretative labour, the film challenges the assumption that technological expertise belongs exclusively to urban, elite, or Western subjects.

At the same time, the narrative does not present an idealised image. It acknowledges structural inequalities: precarious wages, surveillance-like supervision, and the tension between Indigenous ecological knowledge and industrial classification systems. By doing so, the film avoids replacing one stereotype with another. Instead, it reveals how power operates within digital economies, particularly in determining whose knowledge is valued and whose remains marginal.

Ultimately, the film presents Adivasi identity as modern, adaptive, and intellectually engaged. It challenges simplistic media narratives and reframes the conversation: the issue is not whether tribal communities participate in technological futures, but whether their participation is recognised, respected, and fairly represented within those futures.




Mise-en-Scène & Cinematography: Humans in the Loop

Aspect Ratio: The Governing Formal Choice

The film’s most striking formal decision is its use of a 1.55:1 near-square aspect ratio. This frame closely resembles the proportions of a computer monitor, subtly aligning the audience’s gaze with the digital interface that structures Nehma’s labour. The choice creates an intimate, almost storybook-like visual field, drawing viewers closer to characters who are often absent from mainstream representation.

Importantly, this framing places the forest and the data centre within the same visual logic. Neither space is given the sweeping cinematic grandeur typically associated with landscape films, nor is the technological environment glamorised through expansive widescreen spectacle. By containing both within the same proportional limits, the film resists privileging nature as pure or technology as dominant. Instead, it visually suggests coexistence and tension within a shared frame.

The Forest

In the forest sequences, wide-angle shots embed characters within their surroundings. Rather than positioning them against nature as isolated figures, the compositions integrate them into the landscape. Humans appear as participants within the ecosystem, not as observers standing apart from it.

Lighting further reinforces this worldview. The natural, dappled light is warm and diffuse, without harsh directional emphasis. No element is dramatically spotlit; instead, the illumination is evenly distributed, suggesting ecological equality. The mise-en-scène reflects a relational ontology: everything within the frame shares space and significance.

A particularly meaningful example appears in the porcupine sequence, where the camera is positioned low to the ground. By aligning the human and the animal on the same horizontal plane, the cinematography encodes a non-hierarchical perspective. The visual grammar subtly communicates that neither species dominates the frame; both coexist within a shared environment.

The compositions in these forest scenes avoid strict geometric order. Roots, grass, branches, and canopy lines fracture the image into irregular, organic shapes. The frame feels textured and layered rather than structured by rigid symmetry. This organic visual design stands in sharp contrast to the grid-based logic of digital interfaces, symbolically opposing the fluidity of ecological knowledge to the standardised order of algorithmic systems.

Through these formal strategies, the film’s mise-en-scène transforms landscape into philosophy, visually articulating a worldview grounded in relational balance rather than extraction or control.


The Workspace / Data Centre

In Humans in the Loop, the workspace sequences sharply contrast with the forest, creating a visual and ideological opposition through mise-en-scène and cinematography.

The camera frequently employs tight mid-shots and close-ups, compressing the spatial field. Walls, ceilings, and computer screens remain constantly visible, giving the impression that there is no world beyond the frame. Unlike the forest scenes, where depth extends outward organically, the data centre feels boxed in and contained. The near-square aspect ratio intensifies this enclosure, reinforcing a sense of restriction and surveillance.

Lighting plays a crucial role in shaping the atmosphere. The fluorescent artificial light is flat, shadowless, and institutional. It removes gradation and depth, visually echoing the logic of binary classification that underpins machine learning systems. Just as annotation demands that objects be reduced to fixed categories, the lighting reduces visual complexity: everything appears uniformly exposed, either lit or not, leaving little room for ambiguity.

The colour palette further strengthens this contrast. Cool blue-grey tones dominate the workspace, directly opposing the warm ochres and earthy hues of the forest sequences. This chromatic shift encodes an emotional and philosophical divide: the warmth of relational ecology versus the sterility of technological infrastructure.

A particularly telling visual detail is the computer screen functioning as the primary light source. It illuminates Nehma’s face, rather than her illuminating the space around her. Symbolically, the machine casts light onto the human subject, suggesting a reversal of agency. Power appears to flow from screen to body, subtly positioning the worker as shaped by the technological system she serves.

Blocking and composition reinforce this industrial logic. Workers are arranged in rows, facing identical screens, forming an assembly-line image reminiscent of factory production. The repetition of bodies and monitors situates digital annotation within the visual grammar of industrial labour. Though the task is cognitive rather than manual, the mise-en-scène reveals its structural similarity to older forms of mechanised production.

Through these formal choices, the film transforms the workspace into a site of containment, hierarchy, and standardisation, visually articulating how digital economies organise both space and human presence.


Ritual and Domestic Spaces

In Humans in the Loop, ritual and domestic spaces are filmed with medium close-ups that feel intimate rather than restrictive. The camera does not confine characters within tight visual boxes; instead, it “holds” them gently, allowing gestures and expressions to unfold naturally. This creates emotional warmth and reinforces the sense of relational continuity within family life.

Texture becomes especially important in these sequences. Close framing highlights tactile surfaces: rock, bark, woven fabric, soil. These material details construct a visual argument: Adivasi knowledge is embodied and lived through touch, movement, and environment. It is sensory and grounded, not abstract or easily converted into data. By foregrounding texture, the cinematography implicitly contrasts lived material knowledge with digitised classification.

Eye-lines also carry meaning. In scenes where Nehma interacts with her children, horizontal framing places them at equal visual levels. Knowledge transmission appears dialogic and relational, not hierarchical. This stands in direct contrast to the top-down blocking in the data centre, where supervisors loom over workers, reinforcing institutional authority and asymmetry.


The Central Visual Argument

The film’s larger visual structure is built on contrast. Mid-shots and close-ups dominate the confined workspace, intensifying feelings of enclosure, while wide-angle compositions open up the forest and village spaces. The opposition is clear but not exaggerated: pixel versus landscape, compression versus expansion, system versus ecosystem.

One of the most striking cinematic moments is the parallel editing between the AI infant and Nehma’s own baby, Guntu. Close-ups of muscle-movement data and algorithmic tracking on the screen are visually echoed with close-ups of her child’s limbs. The resemblance in framing and scale invites viewers to read them together. Without explicit dialogue, the edit suggests that the care and cognitive attention Nehma directs toward training the machine come at the cost of attention to her own child. The visual rhyme becomes an ethical argument.

Importantly, the film handles the tension between tradition and modernisation with restraint. Shifts in colour temperature and compositional density communicate conflict subtly. The argument emerges atmospherically rather than through overt symbolic exaggeration.




Sound Design & Editing Rhythms

Sound design and editing rhythm deepen the contrast between analog life and digital labour.

In forest and domestic sequences, ambient sounds dominate: wind through leaves, insects, distant conversation, footsteps on soil. These soundscapes are layered and organic, with irregular rhythms. Silence is allowed to breathe. The editing pace is slower, with longer takes that mirror the cyclical, non-linear rhythms of ecological and family life.

By contrast, the data centre sequences feature mechanical hums, keyboard clicks, notification pings, and the low buzz of fluorescent lighting. The soundscape is repetitive and enclosed, lacking natural variation. Editing becomes tighter and more segmented, mirroring the task-based logic of annotation work: clip, label, move on. The rhythm feels structured and metric, echoing productivity targets and algorithmic precision.

Occasionally, the film overlaps these sonic worlds, introducing faint digital sounds over domestic scenes or allowing silence to linger in the workspace. These moments blur boundaries, suggesting how digital labour infiltrates intimate life.

Through this interplay of sound and editing, the film extends its visual contrast into the auditory realm. Analog life feels cyclical, textured, and relational; digital labour feels repetitive, compressed, and systematised. Together, these formal strategies reinforce the film’s central inquiry: how does technological modernity reshape the rhythms of human existence?


The Sound Team

In Humans in the Loop, the sonic architecture is shaped by three key collaborators:

  • Sound Design: Kalhan Raina

  • Score: Saransh Sharma

  • Editing: Swaroop Reghu & Aranya Sahay


Their responsibilities are clearly differentiated. Raina constructs the diegetic world: the sounds that exist within the film’s reality. Sharma shapes the non-diegetic emotional register, guiding audience feeling through the score. The editors determine tempo and duration, controlling how long moments linger and how abruptly they cut, essentially regulating how the film “breathes.”


Sound Design: Diegetic Contrast

The film’s most fundamental sonic opposition lies between organic ambient sound and mechanical ambient sound.

Forest and Domestic World

In the forest and village spaces, the soundscape is layered and polyphonic:

  • Birdsong

  • Wind moving through grass

  • Flowing water

  • Animal movement

  • Children’s voices

These sounds are irregular and unpredictable. No single element dominates the mix. Instead, they coexist simultaneously, creating a relational acoustic field. This polyphony mirrors the ecological worldview the film associates with Adivasi life: non-hierarchical, interconnected, and dynamic.

Importantly, the sound is not overly polished. Background noise, ambient interference, and sonic imperfections are retained. This refusal to “clean” the track in post-production signals authenticity and embodiment. Living sound resists reduction into a pure, isolated signal, just as lived knowledge resists digitisation into fixed data.


Data Centre

By contrast, the data centre’s sonic environment contracts dramatically.

The dominant sounds include:

  • Keyboard tapping

  • Mouse clicking

  • The hum of fluorescent lighting

  • Computer boot-up tones

  • Lagging system glitches

These sounds are metronomic: regular, repetitive, and uniform. Unlike the forest’s layered unpredictability, this acoustic field is controlled and mechanical. Variation is minimal. Rhythm replaces resonance.

The most politically charged sound in the film is the mouse click used to label images. Each click marks an act of classification: pest or not pest, weed or crop, human or other. The sound is dry, sharp, and final. In its smallness lies its power: it sonically enacts the reduction of complex realities into binary categories. What feels like a simple administrative gesture becomes, through repetition, a form of epistemic violence.

Through this stark diegetic contrast, the film transforms sound into argument. Organic soundscapes express relational multiplicity; mechanical soundscapes embody algorithmic reduction. The ear, like the eye, is trained to perceive the tension between analog life and digital labour.


The Score: Sharma’s Compositional Strategy

In Humans in the Loop, composer Saransh Sharma crafts a soundtrack that blends organic instrumentation with textured electronic production. Drawing from ambient, post-classical, and downtempo influences, his compositions layer guitar, piano, synthesizers, and subtle field recordings. The result is intimate and tactile rather than grand or declarative. For this film, the score functions as a quiet emotional bed, allowing feeling to emerge gradually instead of directing it forcefully.

This strategy is not merely aesthetic; it carries political weight.

Rather than assigning “natural” instruments exclusively to forest scenes and electronic textures to the data centre, Sharma allows both to coexist within the same sonic register. Acoustic guitar and piano often mingle with synthesised atmospheres, creating a blended soundscape. This refusal of strict separation mirrors Nehma’s own lived reality: she inhabits both ecological and digital worlds simultaneously. The score thus resists a simplistic binary between tradition and technology.

In the opening porcupine sequence, the ambient textures introduce a dream-like quality. The music gently enlarges the quiet visual moment, giving it contemplative, almost mythic resonance without exaggeration. Here, the score acts as sonic mise-en-scène: it expands the emotional field of the image while maintaining restraint. The scene feels significant not because the music dramatizes it, but because it deepens its atmosphere.

Crucially, the soundtrack avoids swelling crescendos or overt emotional cues. It does not instruct the audience how to feel. Instead, it operates through understatement, leaving interpretive space open. This restraint aligns with the film’s thematic concerns: just as AI systems assign labels and meaning, cinema too can over-determine interpretation. By refusing heavy-handed musical signalling, the score places responsibility back on the viewer, inviting reflection rather than emotional consumption.

In this way, Sharma’s compositional approach becomes part of the film’s broader argument. Sound does not dominate or categorise; it accompanies, coexists, and listens, echoing the relational worldview the film seeks to foreground.


The Kurukh Music Problem: Authenticity vs. Accessibility

In Humans in the Loop, the negotiation between authenticity and accessibility becomes most audible in the treatment of Kurukh music. Director Aranya Sahay openly acknowledges that Oraon musical traditions do not conform to the rhythmic and structural expectations of mainstream Indian cinema audiences. Kurukh songs often shift key unexpectedly, alter rhythm mid-performance, and do not follow predictable metrical patterns like the common four-by-two structure.

This creates a representational dilemma: how can the film remain faithful to Indigenous musical forms without alienating audiences accustomed to regularised cinematic scoring? The solution lies in hybridity. Composer Saransh Sharma incorporates synthesizers, violin, and ambient textures alongside organic elements, and the score becomes a bridge. These instruments are familiar enough to guide mainstream listeners, yet distinct enough to signal entry into a different sonic world.

This compromise is not concealed; it is acknowledged. That transparency itself becomes an ethical gesture. The soundtrack does not claim purity; it stages negotiation. In doing so, it mirrors the film’s broader thematic tension between Indigenous knowledge systems and global technological frameworks. The score becomes a site where two epistemologies meet, overlap, and adjust to one another without fully dissolving difference.


Editing Rhythms: Pacing as Political Argument

The film’s editing, led by Swaroop Reghu and director Aranya Sahay, transforms rhythm into ideology. Because the director co-edits the film, pacing decisions are inseparable from thematic intention.

Slow Rhythm in Analog Sequences

Forest and domestic scenes unfold in longer takes with minimal cutting. Moments are allowed to linger: Nehma observing a porcupine, or pointing out plants and animals to her children. The camera does not rush. Time accumulates rather than fragments. This slower tempo encodes a non-industrial relationship to time, one not structured by productivity metrics or deadlines. Attention here is patient and relational.

Tighter Rhythm in Labour Sequences

Inside the data centre, the editing becomes more rapid and segmented. Cuts increase in frequency, echoing the repetitive click-rate of image labelling. The viewer begins to process images in quick succession, mirroring the cognitive rhythm imposed on Nehma. This is a self-reflexive gesture: the edit trains the audience into the same accelerated perceptual mode that digital labour demands. The film does not simply depict algorithmic speed; it momentarily subjects viewers to it.

The Central Parallel Edit: AI Infant / Guntu

The film’s most powerful editorial statement occurs in the parallel cutting between Nehma labelling infant muscle-movement data and close-ups of her own son Guntu’s limbs at home. The rhythm of the cuts remains consistent across both spaces. The pacing does not differentiate between the biological child and the digital infant.

By maintaining identical temporal units, the edit makes a political claim: these are not separate domains but competing claims on the same resource, time and attention. The equivalence in rhythm suggests displacement. The care directed toward training the machine is drawn from the same bodily and emotional reserve that belongs to her child.

Notably, the sequence does not rely on musical swelling or dramatic cues. The argument is articulated purely through the cut. Editing becomes ethics. Time itself becomes the measure of what is lost, transferred, and reallocated within the human–machine loop.


Restraint and Silence

In Humans in the Loop, director Aranya Sahay employs silence not as emptiness but as argument. His restraint, particularly the refusal to over-explain or over-score, allows the film’s ethical tensions to surface without slipping into didacticism.

Silence becomes most powerful after moments of conflict. When Nehma refuses to label the caterpillar as a “pest” and is reprimanded, the scene does not escalate musically. There is no swelling score to guide emotional response. Instead, the film holds the silence. This pause becomes charged space. It asks: who has the authority to define meaning? Who gets the final word? And, crucially, who occupies the silence that follows disagreement?

In this film, silence is not absence; it is density. It marks the gap between two knowledge systems, between speech and power, between classification and lived understanding. By withholding dramatic cues, Sahay shifts responsibility to the audience. The viewer must interpret, must sit with discomfort, must recognise the stakes without being told what to conclude.

Silence, then, functions as ethical form. It mirrors the film’s central inquiry into meaning-making. Just as AI systems assign labels to fill ambiguity, cinema too can overfill interpretive space. Here, the refusal to fill becomes political. The quiet becomes the site where the film places its most serious claims.

4. Ethical & Political Questions

Through its formal and narrative strategies, the film raises urgent ethical and political concerns:

  • Who defines “ground truth” in AI systems?

    If machine learning depends on human annotation, then the authority to define categories carries enormous power. When Indigenous knowledge conflicts with industrial labels, whose framework prevails?

  • Is digital labour a new form of extraction?

    The film suggests parallels between colonial resource extraction and contemporary data extraction. Cognitive labour becomes the new raw material, harvested from marginalised communities for global profit.

  • What is lost in translation from relational knowledge to binary code?

    When ecosystems are reduced to “pest” or “crop,” complexity disappears. The film questions whether algorithmic efficiency necessarily entails epistemic violence.

  • Who receives recognition and compensation?

    AI appears autonomous, yet it is built on invisible human work. The ethical issue is not only wages, but intellectual authorship and acknowledgment.

  • Can representation avoid repeating the exclusions it critiques?

    By foregrounding Adivasi experience while acknowledging its own compromises (musical, linguistic, aesthetic), the film asks whether ethical storytelling is possible without negotiation.

Ultimately, the political force of the film lies in its form. Through framing, sound, editing, and silence, it insists that technology is never neutral. AI systems reflect human decisions, and those decisions are shaped by power. The film invites viewers to consider not just how machines learn, but how societies choose what counts as knowledge and whose knowledge is allowed to endure.




What Ethical Dilemmas Are Depicted When Training AI with Culturally Specific Data?

In Humans in the Loop, the process of training AI with culturally specific data exposes deep ethical tensions. The film shows that machine learning is not just a technical process but an act of knowledge selection, translation, and sometimes erasure.

1. Whose Categories Govern? The Epistemological Dilemma

One of the central ethical conflicts arises when Nehma refuses to label a creature as a “pest.” From her Oraon ecological perspective, the creature is not harmful to crops and plays a role within a balanced ecosystem. However, the annotation system demands a fixed, pre-approved category. When she deviates from this industrial classification, she is reprimanded.

This moment reveals a fundamental epistemological dilemma: AI systems depend on standardized, “universal” categories, but these categories are rarely neutral. They are shaped by agricultural industry norms, market logic, and dominant scientific frameworks. Local or Indigenous ecological knowledge (relational, contextual, and place-based) does not easily fit into these predefined boxes.

When culturally specific realities are forced into machine-readable labels, nuance is often the first casualty. The richness of lived knowledge becomes compressed into binary options. The ethical issue, therefore, is not simply about accuracy; it is about authority. Who decides what counts as a pest? Who defines harm? And whose worldview becomes encoded as default within global technological systems?

The film suggests that when dominant categories override Indigenous ones, AI does not merely classify the world; it reorganizes it according to the priorities of those in power. Nehma’s knowledge, built from generational, place-based observation, is arguably more accurate, yet the system treats it as error.
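The compression at stake can be pictured as an annotation interface that accepts only a closed set of categories. A small hypothetical sketch (the label set and the function are invented for illustration, not drawn from the film or any real platform):

```python
# Hypothetical annotation schema: the interface accepts only a closed set of
# categories, so any relational or contextual category simply cannot be recorded.
ALLOWED_LABELS = {"pest", "weed", "crop"}

def annotate(item: str, label: str) -> str:
    """Record a label for an item, but only if the schema permits it."""
    if label not in ALLOWED_LABELS:
        raise ValueError(f"{label!r} is not a recognised category")
    return label

print(annotate("caterpillar", "pest"))  # the guideline's answer is accepted

try:
    # Nehma's category has no slot in the schema, so it is rejected outright.
    annotate("caterpillar", "decomposer")
except ValueError as err:
    print(err)  # 'decomposer' is not a recognised category
```

Whatever the annotator knows, only “pest,” “weed,” or “crop” can ever reach the dataset; the schema itself decides in advance which answers are possible.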


2. Extraction Without Compensation: The Data Colonialism Dilemma


Representative datasets containing images of Indigenous people, their languages, culture, and knowledge systems are a product of the labour of the masses. The AI image generator is arguably better off with Nehma's additions, but what does she or her community get out of it? If foreign AI companies depend on the value generated by India's tribal communities, trained on their images and utilising their knowledge of nature, what is the return?

Nehma's cultural knowledge and her community's images are incorporated into a commercial AI system she will never own, profit from, or have any control over. This is data colonialism: the extraction of epistemic and representational value from a marginalised community without consent, credit, or compensation.


3. Bias Reproduction: The Inheritance Dilemma


In a world where machines learn by absorbing human biases, Nehma comes to understand that the technology she tends, much like her children, inherits the discrimination of its labeller. She cannot help but question whether the system she helps create will only serve to perpetuate the fate she has suffered.

The AI does not generate neutral outputs. It inherits the biases of whoever designed its classification framework. AI rides on the back of struggling low-income women labourers, and their input, like their lives, is rarely acknowledged; a biased AI leads to misrepresentation and further isolates already marginalised communities. The ethical dilemma: Nehma is both the victim of existing bias and, through her constrained labour, the unwilling reproducer of new bias.

Conclusion: Technology and Power

Across these three dilemmas (epistemological authority, data extraction, and bias inheritance), the film suggests that AI is not merely technical. It is political.

AI systems are built within global power hierarchies. They decide:

  • What counts as knowledge,

  • Who benefits from data,

  • Whose identity is represented,

  • And whose reality is dismissed.

Nehma’s story reveals that the ethical crisis of AI is not about machines becoming too intelligent; it is about human inequality being encoded into digital systems.

When the world is forced into machine-readable boxes, nuance disappears. When data is extracted without return, colonial patterns persist. When bias is inherited, injustice becomes automated.

The central question remains:
Can AI be reimagined not as an extractive tool, but as a collaborative system that respects local knowledge, ensures fair compensation, and actively resists reproducing bias?

That is the real ethical challenge of our technological age.

4. Representational Violence: The Image Generation Dilemma

This fourth dilemma deepens the ethical crisis by shifting the focus from classification to visibility.

When Nehma uploads her own image and those of her community into the AI image generator to correct its misrepresentation of Adivasi faces, she is attempting to repair a system that rendered her invisible. The AI previously failed to generate accurate representations because its dataset lacked culturally specific images. In order to be properly “seen,” Nehma must now contribute her own likeness.

This creates a powerful paradox.

On the surface, representation appears empowering. Diverse datasets promise inclusivity. AI companies increasingly demand “culturally representative” data to fix algorithmic bias. However, the film subtly exposes a deeper problem: representation itself becomes another site of extraction.

The images of marginalized communities are:

  • Produced through lived experience and social reality,

  • Collected through labor (often low-paid or unpaid),

  • Incorporated into proprietary commercial systems,

  • Monetized without ownership or consent from those represented.

Thus, representational correction becomes representational capture.


The Violence of Visibility

The dilemma is not simply about misrepresentation; it is about the conditions under which representation occurs.

To correct the AI’s blindness, Nehma must surrender her image to the very system that erased her. Her visibility depends on participation in a digital economy that does not recognize her humanity, agency, or rights. She is visible as data, not as a subject.

This is what can be called representational violence:

  • Her identity becomes a dataset.

  • Her cultural markers become training material.

  • Her face becomes a commodity.

The violence lies not in physical harm but in the reduction of personhood to digital resource.

The system does not see her as Nehma, a woman with knowledge, history, and dignity. It sees her as a data point improving model accuracy.

The Ethical Trap

This is perhaps the film’s most precise ethical trap:

  1. If Nehma refuses to contribute her image, AI continues misrepresenting her community.

  2. If she contributes, she strengthens a commercial system that profits from her identity without accountability.

Either way, she remains structurally powerless.

This is not inclusion; it is conditional recognition.
She can be visible only if she becomes usable.

The Broader Implications

The dilemma raises urgent questions for contemporary AI development:

  • Who controls cultural representation in machine-generated imagery?

  • Can inclusion occur without exploitation?

  • Is diversity meaningful if ownership and profit remain centralized?

  • What does consent mean in data economies shaped by inequality?

In many cases, marginalized communities are asked to “fix” AI bias by offering more data. Yet they are rarely offered:

  • Equity,

  • Revenue sharing,

  • Governance power,

  • Or control over how their images are used.

Thus, representation without redistribution becomes another form of structural domination.


Conclusion: From Erasure to Capture

Earlier dilemmas exposed epistemological exclusion and data extraction. This one reveals a subtler form of power: the transformation of identity into asset.

Nehma’s struggle shows that visibility within AI systems does not automatically equal justice. Being represented is not the same as being respected.

The ethical challenge, therefore, is not merely to diversify datasets but to transform the power relations behind them. Without structural change, even correction becomes co-option.

The film leaves us with a haunting question:

Is it liberation to be seen by the machine, if being seen requires surrendering oneself to it?


5. The Hierarchy of Accountability: The Structural Dilemma

This final dilemma moves beyond individual bias or exploitation and exposes something deeper: the structure itself is engineered to dissolve responsibility.

The film demonstrates how AI systems are not neutral technical objects. They are shaped at multiple levels:

  • Dataset guidelines determine what counts as valid data.

  • Client briefs define the purpose and desired outputs.

  • Workplace hierarchies control how tasks are distributed and evaluated.

Each layer encodes assumptions about what matters, what is normal, what is profitable, and what is expendable.

Nehma does not invent the categories she applies. She does not design the classification framework. She does not decide which faces are “representative.” She works within constraints defined elsewhere.


The Chain of Power

The ethical chain portrayed in the film is clear:

American tech client → Indian centre manager → Adivasi data worker

At the top, a corporate client in the Global North sets the objectives.
In the middle, a local manager enforces productivity metrics and compliance.
At the bottom, Nehma executes labeling tasks under strict guidelines.

Each actor appears to “just be doing their job.”

This creates a diffusion of responsibility:

  • The client claims they only requested a product.

  • The manager claims they only enforced policy.

  • The worker claims she only followed instructions.

When harm occurs (misrepresentation, bias, erasure), no single individual appears directly culpable. Accountability evaporates into bureaucracy.

The Myth of AI Neutrality

AI is often described as objective because it is statistical and automated. However, the film reveals that neutrality is undermined long before an algorithm generates output.

Neutrality collapses when:

  • Certain faces are over-represented in training data.

  • Indigenous categories are excluded from taxonomies.

  • Economic efficiency overrides cultural specificity.

  • Marginalized workers lack decision-making power.

The system is not neutral; it reflects the hierarchy that built it.


Outsourcing as Ethical Insulation

The architecture of outsourcing plays a crucial role. By geographically and economically separating:

  • Those who design,

  • Those who manage,

  • And those who label,

the system ensures that moral consequences are distanced from decision-makers.

Outsourcing becomes a form of ethical insulation.

The people most affected by bias (like Nehma’s community) are the least empowered to challenge it. Meanwhile, those with the authority to change the system remain abstracted from its lived consequences.

When Director Sahay asks, “When are we going to take responsibility as humanity for the kind of algorithms we’re building?” the question exposes this structural gap.

Responsibility is everywhere and nowhere at once.

The Structural Dilemma

The dilemma, therefore, is not simply about individual prejudice or exploitation. It is about institutional design.

If:

  • Power flows downward,

  • Profit flows upward,

  • And accountability dissolves sideways,

then injustice becomes systemic rather than accidental.

The architecture itself prevents responsibility from settling anywhere.

Conclusion: Designing for Accountability

Across all five dilemmas (epistemological authority, data colonialism, bias inheritance, representational violence, and structural hierarchy), the film reveals a consistent truth:

AI is not just code. It is an ecosystem of power.

Nehma’s position at the bottom of the chain makes her visible as labor but invisible as agent. The ethical crisis lies not only in biased outputs but in a global structure that fragments responsibility so effectively that no one feels answerable.

The film ultimately challenges us to rethink AI governance:

  • Who has decision-making power?

  • Who shares in the profits?

  • Who bears the harm?

  • And who is held accountable?

Until responsibility is structurally anchored, rather than rhetorically shared, the promise of ethical AI will remain an illusion.

How does the film’s human-in-the-loop metaphor operate beyond the technical term: politically, socially, and culturally?


The "Human-in-the-Loop" Metaphor Beyond the Technical Term

The Technical Term, Briefly

In machine learning, human-in-the-loop (HITL) refers to a system design where a human overseer corrects algorithmic errors, validates outputs, and improves the model's accuracy over iterative cycles. The human is positioned as a check on the machine. The term implies agency, authority, and mutual benefit.
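For readers unfamiliar with the machinery, a minimal sketch of such a loop might look like the following (a hypothetical Python/scikit-learn example; in a real pipeline the stored `human_labels` array would be a live annotator such as Nehma, whose answers carry her guidelines and constraints):

```python
# Minimal human-in-the-loop sketch: the model proposes labels, a human
# corrects the least-confident cases, and the model is retrained each cycle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 4))
# Stand-in for human judgement: in a real pipeline these answers come from
# an annotator, and carry that annotator's guidelines and biases with them.
human_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

# Seed the training set with a few labelled examples of each class.
seed0 = np.where(human_labels == 0)[0][:5]
seed1 = np.where(human_labels == 1)[0][:5]
labelled = list(np.concatenate([seed0, seed1]))

model = LogisticRegression()
for cycle in range(5):
    model.fit(X_pool[labelled], human_labels[labelled])
    # Distance from 0.5 measures how sure the model is about each item.
    confidence = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
    # Route the items the model is least sure about back to the human.
    queue = [i for i in np.argsort(confidence) if i not in labelled][:20]
    labelled.extend(queue)  # the human supplies "correct" labels for these
    print(f"cycle {cycle}: accuracy {model.score(X_pool, human_labels):.2f}")
```

Even in this toy version, the model's "accuracy" is defined entirely by the human-supplied labels, which is precisely the authority the film goes on to interrogate.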

The film accepts this definition as its starting point, then systematically dismantles every assumption embedded in it.


Politically: The Loop as Colonial Circuit

The title alludes to the closed-loop relationship between humans and technology: one "programs" the other and vice versa, forever. But the film makes visible that this loop is not symmetrical. It has a direction: value flows upward from Jharkhand to Silicon Valley; constraint flows downward in return.

The political loop the film exposes is data colonialism: a 21st-century structure that mirrors 19th-century colonial extraction. Raw material (Nehma's cognitive labour and cultural knowledge) is extracted from the periphery, processed at the centre, and returned as a finished product (AI) in which the original producer has no ownership, profit, or control. The film engages one of the most urgent global conversations of our time: how we can prevent social inequities from making their way into AI and instead use it to amplify underrepresented voices.

Crucially, the political loop is self-reinforcing. The AI trained on biased data produces biased outputs. Those biased outputs further marginalise the communities whose knowledge was extracted to build the system. Nehma corrects the machine, but the machine's economic architecture is correcting her in return, telling her which knowledge is valid, which categories are acceptable, which version of the world is real. She is in the loop, but the loop is not in her interest.

Socially: The Loop as Caste and Gender Structure

The film uses Nehma's work at the AI data lab as a site for exploring themes of caste, assimilation, and the desire to belong to a world that seems more "legitimate" in the eyes of dominant society.

Nehma enters the film already caught in multiple social loops she did not design:

The loop of gender: as a woman, her labour, domestic and digital, is treated as naturally available, endlessly renewable, and minimally compensable. There may even be a hidden metaphor here for the way our world operates, characterised by a lack of more feminine and nurturing values in favour of more masculine ambitions of dominance and control.

The loop of caste: a profession that thrives on binary labels is outsourced to people whose plurality transcends labels. Humans are instructed to think like machines in order to instruct machines to act like humans. A marginalised Indian's social conditioning is at odds with a job that formalises societal bias.

The loop of class: Dhaanu, Nehma's daughter, prefers her urban, upper-caste father, gravitating toward a world that appears more "legitimate." This is the social loop working through the next generation: assimilation reproducing the very hierarchy that excludes the assimilated.

The loop metaphor here means that social exclusion is not a one-time event but a recursive system, one that reproduces itself across generations, across institutions, and now across algorithms.


Culturally: The Loop as Epistemological Erasure

This is the film's deepest and most original extension of the metaphor. In a training session at the AI centre, Nehma's supervisor tells her that artificial intelligence is like a child: it must be taught how to see the world. This metaphor becomes a central ideological battleground. If AI is indeed a child, who becomes the teacher? What values and assumptions are encoded in the data it consumes?

The cultural loop operates as follows: Nehma possesses a sophisticated, relational, ecologically precise knowledge system. She is hired to transmit this and other knowledge to an AI. But the transmission is filtered through categories she did not design. A caterpillar must be either a pest or not a pest. A plant must be a weed or a crop. The cultural loop does not absorb Nehma's knowledge; it translates it into a form that erases what makes it valuable, then feeds it back into the world as "objective data." The AI learns from her but cannot see her.

This is the cultural loop in its most precise form: unlike the thousands of images and videos she goes through every day, Nehma isn't even afforded the dignity of being a data point. She teaches the system to see, but the system cannot return the gaze.

The Reciprocal Loop: Who Is Training Whom?

The film's sharpest political-cultural insight is that the loop runs in both directions, but not equally. One "programs" the other, and vice versa, forever. The machine trains on Nehma's knowledge, but Nehma is simultaneously being trained by the machine: told to suppress her ecological intuition, to accept industrial categories, to think in binaries. To programme the robot, she must think robotically too. The cultural violence of the loop is therefore not just extraction but substitution: the AI economy does not merely take Nehma's knowledge; it replaces it with a degraded, commodified version of itself and asks her to use that replacement as her new cognitive standard.

A Dissenting Voice: What the Loop May Not Resolve

It is worth noting that not all critics accept the film's political architecture as fully realised. Critics from within the Adivasi community have argued that the film claims to speak about marginalisation while marginalising the very community it depicts; that it critiques data bias while exhibiting cultural bias; and that it condemns invisibility while erasing Adivasi lived experience. This critique points to an uncomfortable meta-irony: the film itself may be operating in a loop analogous to the one it critiques, extracting Adivasi cultural material for a global festival-circuit audience without sufficient return to the community depicted.

This does not invalidate the metaphor. It deepens it, suggesting that the human-in-the-loop problem is not confined to the data centre but extends to the film industry, the festival circuit, and the very act of representation itself.


POST-VIEWING REFLECTIVE ESSAY TASKS

TASK 1: AI, BIAS, & EPISTEMIC REPRESENTATION

Critical Reflection: Humans in the Loop (2025)
Technology, Knowledge, and the Politics of the Algorithm

Introduction

In the dominant cultural imagination, artificial intelligence is presented as a self-generating, self-correcting, and therefore politically neutral technology. Humans in the Loop (2025), directed by Aranya Sahay, refuses this mythology from its opening frame. Set in the Adivasi communities of Jharkhand, the film follows Nehma, an Oraon woman who takes work as a data labeller at an AI training centre, and uses her experience to expose a foundational contradiction: that a technology marketed as objective is, in fact, built on the classified, extracted, and systematically devalued knowledge of the world's most marginalised people. This essay argues that Humans in the Loop represents algorithmic bias not as a technical error awaiting correction but as a culturally situated and ideologically enforced condition, rooted in epistemic hierarchies that determine whose knowledge counts, whose categories govern, and whose existence is permitted to be "seen" by the machine. Drawing on Louis Althusser's concept of ideological state apparatuses, Jean-Louis Baudry's apparatus theory, and Stuart Hall's theory of representation and encoding/decoding, this essay reads the film as both a narrative about AI labour and a formal intervention into the politics of knowledge itself.

Algorithmic Bias as Cultural Situatedness

The central epistemological confrontation of the film is staged through a single, deceptively simple object: a caterpillar. Nehma, instructed to label it as a "pest" for an AI-powered agricultural system, refuses. Drawing on her Oraon ecological knowledge, built from generational, place-based observation, she understands that the caterpillar consumes only rotting plant matter and is therefore not destructive but regenerative. Her classification is more accurate. Her supervisor Alka, squeezed between the demands of an American tech client and the productivity targets of the centre, overrules her. Nehma is told, explicitly, not to use her brain.

This scene operationalises what scholars of science and technology studies call situated knowledge: the argument, developed by Donna Haraway, that all knowledge is produced from a specific position, and that claims to universal objectivity are themselves expressions of power. The AI's categorical framework (pest/non-pest, weed/crop) is not neutral. It is designed within and for an extractive, monocultural, industrial agricultural economy. It encodes the priorities of its designers: maximise yield, eliminate deviation, streamline classification. Nehma's knowledge system does not fit these categories because it operates through a different ontology altogether, one that understands ecological relationships as contextual, reciprocal, and irreducible to binary opposition.

Critically, the film does not present this as a cultural misunderstanding to be resolved through better training data. The supervisor's instruction to "not use your brain" reveals that the suppression of Nehma's knowledge is structural, not accidental. The AI system does not merely fail to accommodate Adivasi ecological knowledge; it is architecturally incompatible with it, because to accommodate it would be to challenge the industrial categories that make the system commercially valuable to its clients. Algorithmic bias, in this framing, is not a bug in the system. It is the system functioning exactly as designed by those with the power to design it.

Apparatus Theory and the Ideological Screen

Jean-Louis Baudry's apparatus theory, developed in his landmark essay "Ideological Effects of the Basic Cinematographic Apparatus" (1974), argues that the cinema apparatus is not a neutral recording device but an ideological machine. The positioning of the camera, the organisation of spectatorial vision, and the very form of projected cinema naturalise a particular, centred, sovereign subject as the norm. Applied beyond cinema to the technological apparatus in general, Baudry's framework allows us to ask: what does the AI system, as an apparatus, make visible, and what does it systematically render invisible?

Humans in the Loop stages this question through the film's most formally precise scene. When Nehma and members of her community interact with an AI image generator and prompt it to produce an image of a tribal woman, the system generates a pale, light-haired, Europeanised figure. The AI literally cannot see Nehma. It has been trained on a dataset in which Adivasi faces, bodies, and aesthetic norms are either absent or subordinated to a hegemonic visual norm, and it reproduces that norm as "universal." This is the apparatus operating ideologically: it presents the output not as the outcome of biased training data but as simply "what a woman looks like."

Louis Althusser's concept of ideological state apparatuses extends this analysis. For Althusser, ideology does not merely distort reality; it constitutes subjects, interpellating them into positions within the social order. The data-labelling centre functions in the film as precisely such an apparatus: it does not merely exploit Nehma's labour, it attempts to reconstitute her as a cognitive subject compatible with its requirements. She is trained to suppress her own knowledge, to accept industrial categories, to think in binaries. To programme the machine, she must first allow the machine's logic to programme her. The human-in-the-loop, in this reading, is not the overseer of the machine but its ideological product.


Epistemic Hierarchy: Whose Knowledge Counts?

Stuart Hall's theory of representation, the argument that meaning is not inherent in objects but produced through systems of representation that are always organised by power, provides the framework for reading the film's deepest claim. The film stages a direct confrontation between two systems of representation: Adivasi ecological knowledge, which is relational, contextual, and place-based, and the AI classification system, which is universal, binary, and designed for scalability. The epistemic hierarchy the film exposes is not simply that one system is ranked above the other. It is that the dominant system denies the subordinate system the status of knowledge at all. Nehma's understanding of the caterpillar is treated not as a rival classification but as an error, a failure of correct labelling. This is what philosophers of knowledge call epistemic injustice: Miranda Fricker's term for the harm done to a subject specifically in their capacity as a knower. Nehma is not merely underpaid; she is epistemically dismissed. Her testimony about the living world is inadmissible within the system she has been hired to improve.

The film reinforces this hierarchy through the chain of authority it depicts. The American tech client, never physically present but audible on a Zoom call, defines what "correct" labelling means. The Indian centre manager transmits and enforces this definition. Nehma's expertise is positioned at the bottom of this chain: most proximate to the data, most distant from the definition of accuracy. Dataset guidelines, client briefs, and workplace hierarchies encode dominant assumptions into the material Nehma is asked to produce, tracing how AI's "neutrality" is undermined by who gets to name things, whose faces are over-represented, and whose stories never enter the frame.

Cinematic Form as Critical Argument

What elevates the film beyond documentary critique is the way its formal choices actively participate in its argument. The 1.55:1 near-square aspect ratio, mimicking the shape of a computer monitor, positions the audience inside the machine's frame of vision. We watch Nehma through the same screen she watches images on. This is apparatus theory made literal: the film places us in the position of the classifier and asks us to notice what we cannot see within the frame.

The cinematography is sensitive and tactile, filled with the texture of rock, the swaying of grass, and the expressions on faces, painting a robust picture of lives lived in communion with the natural world. The forest sequences use natural light, organic composition, and low camera angles that place human and animal at the same horizontal level, encoding a non-hierarchical, relational world-view. The data centre, by contrast, is characterised by fluorescent flatness, tight framing, and the blue-white glow of the computer screen casting light onto Nehma's face. In this visual economy, the machine illuminates the human; power flows from screen to body, not the reverse. The cinematography does not merely illustrate the political argument; it enacts it.

The film's most formally precise political statement is the parallel editing sequence in which Nehma labels infant muscle-movement data to train an AI walking model, while in cross-cut her own son Guntu takes his first steps unseen. The editor holds both sequences in identical rhythm. The argument made through the cut is not sentimental but structural: the same unit of cognitive attention, the same quality of care, is displaced from child to machine. The labour that builds artificial intelligence is purchased with the currency of lived human experience.

Conclusion

Humans in the Loop makes a precise and urgent argument: algorithmic bias is not a technical problem awaiting a technical solution. It is the expression of epistemic hierarchies that predate the algorithm and will survive its correction unless the underlying politics of knowledge are confronted. Through Nehma's experience, the film demonstrates that the categories embedded in AI systems are culturally situated, designed within and for dominant economic interests, and that the communities whose knowledge and labour are extracted to build these systems are systematically denied the authority to define what counts as correct. Apparatus theory illuminates how the AI system, like cinema itself, naturalises its ideological positioning as neutral vision. Hall's theory of representation shows how meaning is always organised by power, and that the AI's failure to see Adivasi faces is not an oversight but a structural outcome of whose visual norms dominate the training pipeline. Althusser's framework reveals the data centre as an ideological apparatus that does not merely exploit but reconstitutes its workers as subjects compatible with the system's requirements.

The film asks a question that no algorithm can answer: when are we going to take responsibility, as humanity, for the kind of systems we are building? That question belongs not to the data centre but to the political economy that designed it, and to the audiences, critics, and scholars equipped to name what the machine cannot see.


References:


Alonso, D. V. Imagining AI Futures in Mainstream Cinema: Socio-Technical Narratives and Social Imaginaries. AI & Society, 2026.

Anjum, N. “Aranya Sahay’s Humans in the Loop and the Politics of AI Data Labelling.” The Federal, 2026.

Barad, Dilip. “Humans in the Loop: Exploring AI, Labour and Digital Culture.” Blog post, Jan. 2026.

Bazin, André. What Is Cinema? Vol. 1, University of California Press, 1967.

Bordwell, David, and Kristin Thompson. Film Art: An Introduction. 12th ed., McGraw-Hill Education, 2019.

Cave, Stephen, et al. “Shuri in the Sea of Dudes: The Cultural Construction of the AI Engineer in Popular Film, 1920–2020.” Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, Oxford University Press, 2023.

Deleuze, Gilles. Cinema 1: The Movement Image. Translated by Hugh Tomlinson and Barbara Habberjam, University of Minnesota Press, 1983.

Frías, C. L. “The Paradox of Artificial Intelligence in Cinema.” Cultura Digital, vol. 2, no. 1, 2024, pp. 5–25.

Göker, D. “Human-like Artificial Intelligence in Indian Cinema: Cultural Narratives, Ethical Dimensions, and Posthuman Perspectives.” International Journal of Cultural and Social Studies, vol. 11, no. 2, 2025, pp. 1–10.

Haris, M. J., et al. “Identifying Gender Bias in Blockbuster Movies through the Lens of Machine Learning.” Humanities and Social Sciences Communications, vol. 10, 2023.

“Humans in the Loop (film).” Wikipedia, retrieved Feb. 2026.

Indian Express Editorial. “Humans in the Loop: Technology, AI and Digital Lives.” The Indian Express, 2026.

McDonald, Kevin. Film Theory: The Basics. 2nd ed., Routledge, 2023.

Sahay, Aranya, director. Humans in the Loop. India, 2024.

Shepherdson, Charles, et al., editors. Film Theory: Critical Concepts in Media and Cultural Studies. Vols. 1–4, Routledge, 2004.

Sui, Z., and S. Wang. “Dogme 25: Media Primitivism and New Auteurism in the Age of Artificial Intelligence.” Frontiers in Communication, vol. 10, 2025.

Vighi, Fabio. Critical Theory and Film: Rethinking Ideology through Film Noir. Bloomsbury Academic India, 2019.
Yu, Y. “The Reel Deal? An Experimental Analysis of Perception Bias and AI Film Pitches.” Journal of Cultural Economics, vol. 49, 2025, pp. 281–300.


Thank you...!!!

Be Learners.











