Rethinking AI in Futures Work
Artificial intelligence is often presented as a tool to accelerate foresight: processing more data, scaling predictions, and producing scenarios at speed. Sarah Pink's recent work argues that this view is incomplete. AI is not just a technical instrument; it functions as an anticipatory infrastructure.
This means AI shapes:
- How futures are imagined
- How organizations act on uncertainty
- How research methods evolve
For businesses, this shifts the focus from AI as a quick-fix predictor to a system that underpins strategic imaginaries, requiring careful integration with human expertise to address real-world contingencies.
Key Concepts
Pink introduces two foundational concepts that reshape how we think about AI in futures work:
Anticipatory Infrastructures
AI systems that enable visions of possible futures, even when they don't deliver concrete solutions. Like sewers, they support activity while themselves remaining largely unseen.
Anticipatory Realism
The process where people correct AI-driven futures with everyday expertise, ensuring scenarios align with lived experience.
“Pink draws on infrastructure theory to describe AI as something that supports visions without necessarily delivering them outright.”
In organizational terms, AI can generate promissory futures—such as optimized supply chains or personalized customer experiences—but its true utility emerges when combined with qualitative methods that ground these in everyday realities.
Polycrisis and the Demand for Foresight
Global institutions describe the present as a “polycrisis”—climate change, geopolitical instability, economic shocks, and rapid technological shifts converging at once.
In response:
- Governments, businesses, and futurists are building futures knowledge markets
- AI is positioned as a solution
- Social science perspectives remind us that futures are not linear projections
Sector Impact
Businesses in sectors like energy or logistics face polycrisis directly:
- Supply disruptions from geopolitical tensions
- Resource scarcity from environmental changes
Pink notes that AI contributes to the polycrisis by being entangled with resource-intensive economies, yet it is also pitched as a navigator through it. Organizations must engage with this tension critically, incorporating social science methods so that foresight accounts for nonlinear societal shifts.
AI’s Dual Role
Together, anticipatory infrastructures and anticipatory realism highlight AI’s role in structuring futures discourse while requiring human judgment to keep it realistic.
Practical Example
Consider a manufacturing firm using AI for scenario planning:
| AI Output | Human Correction |
|---|---|
| Projects cost savings from automation | Reveals overlooked skill gaps |
| Models efficiency improvements | Identifies employee resistance |
| Generates optimistic timelines | Grounds in practical constraints |
This duality is practical: the tool might project cost savings from automation, but employee input, applied through anticipatory realism, surfaces the overlooked constraints that make a strategy viable.
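To make this concrete, the following is a minimal Python sketch of how such a correction loop might be recorded; the `ScenarioClaim` structure, the example claims, and the corrections are hypothetical illustrations, not Pink's method or any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioClaim:
    """One AI-generated projection and the human corrections attached to it."""
    ai_output: str
    corrections: list[str] = field(default_factory=list)

    @property
    def grounded(self) -> bool:
        # A claim counts as "grounded" once at least one correction
        # drawn from lived expertise has been recorded against it.
        return bool(self.corrections)

# Hypothetical claims mirroring the table above.
claims = [
    ScenarioClaim("Automation saves 18% of operating cost by 2027"),
    ScenarioClaim("Line efficiency improves 12% within six months"),
]

# Anticipatory realism in practice: workshop participants attach
# everyday expertise to each AI projection before it informs strategy.
claims[0].corrections.append("Retraining costs for 40 operators not modelled")
claims[1].corrections.append("Night-shift teams expect friction with the new workflow")

ungrounded = [c.ai_output for c in claims if not c.grounded]
print(f"Claims still lacking human correction: {ungrounded}")
```

The point is not the data structure but the discipline it encodes: no AI projection feeds strategy until someone has checked it against lived experience.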
Pink emphasizes that futures are difficult to define:
- Often treated as linear in business narratives
- But uncertain in social science views
This calls for methods that blend AI with ethnographic or speculative approaches, allowing organizations to contest dominant visions and build resilience.
Side Effects Worth Noting
AI’s impact is not limited to technical outputs. Its side effects include:
- Methodological experiments — ethnographic futures, speculative design
- Reconfigured expertise — new roles and collaborations
- Generated imaginaries — speculative visions that nonetheless shape organizational strategy
AI Methods Evolution
Pink discusses how AI methods evolve incrementally, with “biographies” across projects. For organizations, this suggests:
- Treat AI integration as an ongoing process, not a one-off implementation
- Use AI for initial pattern detection, then layer qualitative insights (a minimal workflow sketch follows this list)
- Pay attention to epistemological side effects—new ways of knowing through human-AI interfaces
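As an illustration only, this Python sketch shows one way a team might keep such a biography: each iteration logs what the AI surfaced, what qualitative work added, and how the round changed what the team can claim to know. The `MethodBiography` and `MethodIteration` structures and their field names are assumptions for this sketch, not drawn from Pink's paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MethodIteration:
    """One round of an ongoing human-AI foresight process."""
    when: date
    ai_patterns: list[str]           # what the model surfaced
    qualitative_insights: list[str]  # what ethnography or workshops added
    epistemic_notes: str             # how this round changed what we can know

@dataclass
class MethodBiography:
    """Running record of how an AI-assisted method evolves across projects."""
    project: str
    iterations: list[MethodIteration] = field(default_factory=list)

    def log(self, iteration: MethodIteration) -> None:
        self.iterations.append(iteration)

biography = MethodBiography(project="Supply-chain foresight pilot")
biography.log(MethodIteration(
    when=date(2025, 3, 1),
    ai_patterns=["Clustering flags port congestion as a recurring theme"],
    qualitative_insights=["Planners describe informal workarounds the data misses"],
    epistemic_notes="Model output is now read as a prompt for interviews, not a forecast.",
))
print(f"{biography.project}: {len(biography.iterations)} documented iteration(s)")
```

Keeping a record like this treats integration as a process rather than a one-off implementation, and makes the epistemological side effects visible enough to discuss.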
Implications for Strategy
For leaders and decision-makers, several lessons emerge:
1. AI is Not Sufficient on Its Own
It structures imaginaries but requires human ethics, values, and contextual insight.
Action: Build interdisciplinary teams that pair data scientists with colleagues from anthropology, psychology, and sociology.
2. Qualitative Methods Remain Essential
Participatory and ethnographic approaches provide depth that AI-driven foresight cannot replicate.
Action: Use AI to analyze trends, then conduct workshops to apply anticipatory realism, aligning projections with user behaviors.
3. Interdisciplinary Collaboration is Critical
Futures research benefits from diverse fields.
Action: Bring reflexivity into strategy sessions by questioning AI biases and drawing on broader resources such as employee expertise.
4. Scenario Use Must Be Reflexive
AI-generated futures are provisional, not definitive.
Action:
- Pilot AI in small-scale foresight exercises
- Iterate based on feedback
- Document method biographies for future reference
- Audit AI outputs for potential harms
The Rise of AI-Driven Foresight
Pink situates AI’s rise in foresight within crisis contexts, noting academic proposals for hybrid AI-expert frameworks to detect emerging topics.
Organizational Response
Governments and businesses are adopting similar approaches:
- Australia’s guidelines emphasize human oversight in AI foresight
- Tools that combine AI with qualitative elements are emerging
- Approaches such as conversational choreography are being explored for more nuanced human-AI interactions
The urgency lies in the expansion of the futures knowledge market; organizations that integrate AI as an anticipatory infrastructure position themselves to shape it rather than merely react to it.
Challenges in Implementation
Implementing this framework requires addressing:
| Challenge | Consideration |
|---|---|
| Biases | Audit training data for representation gaps |
| Skills shortages | Invest in interdisciplinary training |
| Ethical debates | Let questions about AI intelligence and potential harms inform usage |
| Transparency | Foster cultures where side effects are explored productively |
Pink notes AI's contested nature: philosophical questions about intelligence and harms must inform how it is used. Businesses should prioritize transparency and foster cultures where side effects are explored productively; the sketch below shows one small way to make the bias challenge auditable.
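As one concrete, hypothetical response to the bias row in the table above, this Python sketch flags representation gaps by comparing observed group shares in a dataset against reference shares; the function name, tolerance threshold, and example data are assumptions for illustration, not a standard auditing tool.

```python
from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.05):
    """Flag groups whose share in the data deviates from a reference share.

    records: list of dicts describing training or input examples.
    reference_shares: expected share per group (e.g. from workforce or census data).
    tolerance: maximum acceptable absolute deviation.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Illustrative data: scenario inputs skewed toward one region.
records = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
print(representation_gaps(records, "region", {"urban": 0.6, "rural": 0.4}))
# -> {'urban': {'observed': 0.8, 'expected': 0.6}, 'rural': {'observed': 0.2, 'expected': 0.4}}
```

An audit like this does not settle the ethical debates Pink points to, but it makes one class of gap visible enough for interdisciplinary colleagues to discuss.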
Conclusion
AI’s promise in foresight lies less in prediction than in provocation.
It enables new ways of imagining futures, but its outputs must be tempered by:
- Human judgment
- Qualitative insight
- Reflexive practice
For organizations navigating uncertainty, the challenge is not to adopt AI uncritically, but to integrate it into reflexive, interdisciplinary futures work.
“AI as an anticipatory infrastructure invites actions toward the unknown, reconfiguring methods and expertise.”
In short: AI can accelerate foresight, but wisdom comes from how we use it to build grounded, resilient strategies.
References
- Pink, S. (2025). Artificial Intelligence and the Futures Turn: An anticipatory infrastructure for qualitative methods. Qualitative Research in Psychology. DOI: 10.1080/14780887.2025.2570167
- Garsten, C. & Sörbom, A. (2021). On professional futurists and the futures knowledge market.
- Geurts et al. (2022). Hybrid AI-expert frameworks for emerging topic detection.
- Marres et al. (2024). Super-controversy framework for AI harm auditing.
- Omena et al. (2024). GenAI methodology maps.