Research

Our Research Question

At PIL, everything starts with a deceptively simple question:
“How good can we be to each other?”

We investigate peace as a behavioral phenomenon—something that can be designed for, measured, and scaled across boundaries of culture, politics, and identity.

Our Methodologies

Our research blends behavioral science, engineering, and data-driven design. We focus on:

  • Behavior Design
    Applying the Stanford Fogg Behavior Model and the Person-Action-Context (PAC) model to understand and shape pro-social behaviors (see the sketch after this list).

  • Game Design Thinking
    Using loops, feedback, and play dynamics to design environments that encourage repeatable, positive engagement.

  • Data Science for Peace
    Developing new metrics and data standards, such as the Peace Data Standard, that allow us to quantify trust, cooperation, and cross-boundary interaction in real time.

  • Systems Prototyping
    Building and testing interventions—digital, financial, civic, or organizational—that can be replicated and scaled.
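
The Fogg Behavior Model holds that a behavior occurs when Motivation, Ability, and a Prompt converge at the same moment, often summarized as B = MAP. As a minimal sketch of that logic (not PIL instrumentation), the Python below models a pro-social behavior firing when a prompt arrives and motivation and ability together clear an activation threshold; the scoring rule, threshold value, and field names are illustrative assumptions.

from dataclasses import dataclass

# Minimal sketch of the Fogg Behavior Model (B = MAP): a behavior fires when
# Motivation and Ability jointly clear an activation threshold at the moment
# a Prompt arrives. The multiplicative score and the threshold value are
# illustrative assumptions, not published PIL parameters.

@dataclass
class BehaviorContext:
    motivation: float  # 0.0 (none) to 1.0 (high)
    ability: float     # 0.0 (very hard) to 1.0 (effortless)
    prompted: bool     # did a prompt / trigger occur?

def behavior_occurs(ctx: BehaviorContext, activation_threshold: float = 0.25) -> bool:
    """Return True if the modeled pro-social behavior is expected to fire."""
    if not ctx.prompted:
        return False  # without a prompt, motivation and ability are not enough
    return ctx.motivation * ctx.ability >= activation_threshold

# Example: a well-timed prompt, moderate motivation, an easy action.
print(behavior_occurs(BehaviorContext(motivation=0.6, ability=0.8, prompted=True)))  # True

The same structure could be extended toward the Person-Action-Context (PAC) model by parameterizing who is acting, which action is targeted, and the context in which the prompt arrives.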

Research Domains

  • Peace Innovation
    Exploring how new products, services, and systems foster trust, collaboration, and mutual value creation.

  • Peace Engineering
    Treating peace outcomes as design criteria for the built environment, infrastructure, and technology.

  • Peace Tech
    Prototyping tools that augment flourishing, not just reduce harm.

  • Peace Finance
    Designing metrics, credits, and investment instruments that allow companies and capital markets to recognize and reward pro-social outcomes.

Current Focus: AI as Persuasive Technology

AI is the most powerful new form of persuasive technology in human history.
Just as social media reshaped behavior at scale over the last two decades, AI is now shaping how we work, learn, communicate, and collaborate—often invisibly.

Our research explores:

  • How AI is being designed to influence human decision-making, habits, and relationships.

  • How AI might inadvertently amplify bias, conflict, or exclusion.

  • How AI can instead be engineered to foster positive peace: empathy, cooperation, fairness, and sustainable flourishing.

  • The possibility of AI as Peace Tech—a co-pilot that augments human dignity and capability, rather than replacing people.

We are building the conceptual and measurement frameworks that allow policymakers, companies, and communities to see AI not only as a technology, but as a behavioral environment with profound peace and justice implications.

Frameworks We’ve Created

  • Peace Data Standard — a practical and theoretical framework for measuring peace outcomes in digital interactions.

  • Minimum Acceptable Peaceful Interactions (MAPIs) — metrics for detecting the smallest unit of peaceful engagement and scaling it up (sketched below).

  • Peace Credits — an emerging concept analogous to carbon credits, designed to reward individuals, organizations, and communities for creating measurable positive peace.
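
To make the first two frameworks concrete, the sketch below shows one hypothetical way a Peace Data Standard record and a MAPI check could be expressed in code: a positive, reciprocated interaction across a group boundary counts as a MAPI. The field names, valence scale, and qualifying rule are illustrative assumptions for exposition, not the published standard.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class InteractionEvent:
    actor_group: str        # self-identified group of the initiating party
    counterpart_group: str  # group of the receiving party
    valence: float          # -1.0 (hostile) to +1.0 (pro-social), from platform signals
    reciprocated: bool      # did the counterpart respond positively?
    timestamp: datetime

def is_mapi(event: InteractionEvent) -> bool:
    """Smallest unit of peaceful engagement (illustrative rule): a positive,
    reciprocated interaction that crosses a group boundary."""
    crosses_boundary = event.actor_group != event.counterpart_group
    return crosses_boundary and event.valence > 0 and event.reciprocated

events = [
    InteractionEvent("group_a", "group_b", 0.7, True, datetime.now(timezone.utc)),
    InteractionEvent("group_a", "group_a", 0.9, True, datetime.now(timezone.utc)),
]
print(sum(is_mapi(e) for e in events))  # 1 qualifying MAPI

Aggregating records like these over time is one way, in principle, that measurable positive peace could feed instruments such as Peace Credits.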

From Research to Practice

Our research does not stay in the lab. It is continually tested in the field—through city labs, corporate partnerships, cultural transformation projects, and citizen diplomacy campaigns.

For examples of how our research becomes practice, see our Projects page.
For peer-reviewed outputs and conceptual frameworks, visit our Publications.