Last updated: ...
This Storyboard - which we call our "stain" chart - shows you at a glance how strong or weak a given narrative is right now relative to its history.

For each narrative or "semantic signature" listed on the left of the chart, we plot a series of blue dots on the right, each of which represents a specific weekly density or volume reading of that narrative from within the date range we are covering. The red arrow is the most recent reading, so it's just like the "YOU ARE HERE" spot on a map. The x-axis scale shows the range of index values. If a dot is at 100, that means the story is 100% more present in media than usual. If it's at 0, it's at its normal level.

The light blue shaded box covers the middle 50% of readings across the date range, so you can see quickly whether the current reading is typical (inside the blue box), depressed (to the left of the blue box), or elevated (to the right of the blue box).

If you hover over a specific blue dot, you will see the specific date and measurement that the dot represents.
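For readers who prefer to see the chart's logic spelled out, the classification above can be sketched in a few lines. This is a minimal illustration, assuming the index values are already computed (0 = normal level, 100 = twice the usual media presence); the function name and data are illustrative, not part of Perscient's actual tooling.

```python
# Illustrative sketch: classify a signature's latest reading against the
# middle 50% of its history, mirroring the light blue box on the chart.
from statistics import quantiles

def classify_current(readings):
    """Classify the most recent weekly index reading.

    `readings` is a chronological list of weekly index values; the last
    element plays the role of the red "YOU ARE HERE" arrow.
    """
    q1, _, q3 = quantiles(readings, n=4)  # 25th / 50th / 75th percentiles
    current = readings[-1]
    if current < q1:
        return "depressed"   # left of the light blue box
    if current > q3:
        return "elevated"    # right of the light blue box
    return "typical"         # inside the box

# Example: a signature that has spiked in its latest week.
print(classify_current([2, 5, -3, 8, 0, 4, 61]))  # "elevated"
```

The middle 50% here is simply the interquartile range of the plotted readings, which is why a dot outside the shaded box immediately signals an unusually weak or strong week for that narrative.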

The Pulse

Courts Gain Credibility and Executive Power Concerns Ease as AI Governance and Surveillance Reshape American Institutional Narratives

Executive Summary

- Aggressive federal AI governance is being normalized rather than contested. Media concern about excessive presidential power dropped sharply this period—the largest single movement across all monitored signatures—even though the White House continues to pursue centralized AI policy through executive orders, a proposed federal preemption framework, and a new litigation task force aimed at challenging state AI laws. At the same time, language embracing constitutional flexibility remains well above average, suggesting that the media environment has absorbed assertive executive action into the baseline of normal governance and may offer less narrative resistance to further executive-led policymaking.

- The judiciary is emerging as the most credible institutional actor in AI's legal frontier. Praise for the Supreme Court as a defender of liberty strengthened—the only signature to rise this period—while criticism of courts for overreach or activism stayed muted and below average. This favorable positioning arrives precisely when courts are being asked to resolve foundational questions about AI authorship, attorney-client privilege with AI platforms, and the validity of federal efforts to preempt state regulation. With Congress stalling on preemption legislation, the courtroom rather than the legislature appears to be the venue where AI governance disputes will be settled, and current narrative conditions grant the judiciary a deep reservoir of public legitimacy for that role.

- AI-powered surveillance is driving one of the sharpest and most persistent narrative imbalances Perscient monitors. Critical language about abusive policing remains well above average while language defending police as dedicated public servants stays well below it, producing a gap of roughly 60 points that has held steady. High-profile cases—facial recognition misidentification, license plate reader abuse by officers, and the expansion of real-time AI monitoring infrastructure—continue to fuel the critical side of the ledger, and Congressional scrutiny of AI companies' cooperation with government surveillance programs is now adding a new dimension to this pressure.

- Narratives celebrating foundational legal safeguards—presumption of innocence, equal application of law, and the effectiveness of checks and balances—remain flat or suppressed across the board. None of these signatures rose during the period, even though courts, Congress, and the executive branch are all actively contesting the boundaries of AI policy. The absence of robust "institutional good faith" language suggests that media discourse has moved toward demanding systemic accountability rather than affirming that existing structures are working as designed, a condition that leaves critical narratives about both policing and executive authority largely unchecked by countervailing institutional confidence.

---

Executive Overreach Narratives Moderate Even as Federal AI Policy Centralizes Through the White House

Perscient's semantic signature tracking the density of language arguing that presidential power has grown excessive declined by 20 points to an Index Value of 28, the largest single movement across all monitored signatures this period. While the narrative remains stronger than average, the drop signals a meaningful recalibration in how media frames executive authority. This moderation is striking not because the White House has pulled back on assertive policymaking, but because the media environment appears to be absorbing that assertiveness into the baseline of normal governance.

On December 11, 2025, President Trump signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence," which explicitly identifies "excessive state regulation" as an obstacle to the administration's AI ambitions. The order directs the Attorney General to challenge state AI laws in federal court through a newly established AI Litigation Task Force. On March 20, 2026, the White House released a four-page blueprint directing Congress to adopt a unified federal approach to AI governance, organized around six broad objectives including protecting children online and promoting innovation. The framework's most controversial element is its call for federal preemption of state AI laws, framing the current "patchwork" of state-level regulation as contrary to innovation and American competitiveness.

Yet Congress has so far declined to follow the White House's lead on preemption. The Republican-controlled Congress failed to include a provision in its budget reconciliation bill that would have banned state-level AI regulation for 10 years; the measure was removed as "extraneous matter." A parallel effort to insert a blanket moratorium into the NDAA was also withdrawn. On social media, advocates have warned that "federal AI preemption would erase that progress overnight," while others have called on Congress to simply "pass AI preemption" and move on.

Despite this Congressional pushback, Perscient's semantic signature tracking the density of language arguing that authorities should not let the Constitution get in the way of doing the right thing holds at an Index Value of 38, well above average and flat over the period. This elevated reading indicates a media environment receptive to constitutional pragmatism, even as targeted criticism of executive overreach recedes. The gap between these two signatures is instructive: broad acceptance of institutional flexibility (38) persists alongside a marked easing of concern about executive expansion (28, down sharply). The pattern suggests that the administration's AI governance posture is becoming normalized rather than contested.

Meanwhile, our semantic signature tracking the density of language praising the effectiveness of checks and balances sits at an Index Value of -2, essentially at its long-term average and unchanged. The absence of strong feeling about separation of powers suggests that media coverage has not yet framed the federal-state AI governance battle primarily as an institutional-balance story. Legal analysts have noted that "the immediate consequence of the Executive Order is legal ambiguity," and the validity of targeted state laws will likely be determined through prolonged litigation. As of April 2026, many agencies appear not to have taken the various actions ordered by the EO despite the passage of their respective deadlines, introducing operational uncertainty even as the White House's rhetorical posture remains assertive.

The declining salience of imperial-presidency language alongside elevated constitutional-flexibility language suggests that aggressive federal AI governance has been absorbed into the media baseline, potentially creating conditions in which executive-led policy can advance with less narrative resistance, even as its legal authority faces courtroom challenges.

Judicial Institutions Emerge as the Trusted Arbiters of AI's Legal Boundaries

As executive overreach narratives moderate, confidence in the judiciary appears to be rising to fill part of the institutional void. Perscient's semantic signature tracking the density of language praising the Supreme Court as a defender of liberty rose by 4 points to an Index Value of 21, the only signature to strengthen over the most recent period. The narrative is now stronger than average at a moment when courts are increasingly asked to draw the lines that legislators have not.

Our semantic signature tracking the density of language arguing that courts are legislating from the bench or interfering with the executive sits at -19, below the long-term average and flat. The combination of rising judicial praise and muted overreach criticism positions the courts favorably in public discourse, giving them a reservoir of legitimacy as they take on some of the most complex questions in technology law.

The Supreme Court reinforced this role on March 2, 2026, when it denied certiorari in Thaler v. Perlmutter, leaving intact the D.C. Circuit's ruling that the Copyright Act requires copyrightable works to be authored by a human being. Legal commentators summarized the emerging consensus: "Only human beings can be inventors or authors. AI, no matter how sophisticated, is a tool—not a creator."

In United States v. Heppner, Judge Rakoff of the Southern District of New York ruled on what the court called "a question of first impression nationwide": whether written exchanges between a criminal defendant and the generative AI platform Claude were protected by attorney-client privilege or the work product doctrine. The court held they were not, reasoning that Claude is not an attorney, no attorney-client relationship existed, and inputs to the platform were not confidential under Anthropic's privacy policy. Legal professionals warned that "every message you send to ChatGPT, Claude, or any other chatbot is electronic communication with a third party" and therefore discoverable in court.

Both Thaler and Heppner illustrate the judiciary's emerging role as a line-drawing institution for AI's legal boundaries, delineating where AI sits relative to established doctrines of authorship, privilege, and personhood. Judicial praise is rising, and overreach concerns are muted. Analysts tracking AI litigation have noted that these cases are "shaping how courts will interpret foundational issues like ownership, authorship, and liability in AI systems going forward."

Beyond these precedent-setting cases, the Executive Order and Framework signal aggressive federal efforts to challenge state AI laws through litigation, but the viability of these legal theories, particularly under the Dormant Commerce Clause and Section 5 of the FTC Act, remains uncertain. Legal scholars have argued that "Congress cannot compensate for its inability or unwillingness to regulate AI by silencing states that can." The courts, not Congress, are positioned as the venue where these disputes will be resolved, and the current signature environment suggests that they will enjoy meaningful public legitimacy as they do so.

AI-Powered Surveillance Widens the Gap Between Competing Policing Narratives

While courts gain legitimacy as arbiters of AI's legal boundaries, AI-powered surveillance in policing is driving one of the sharpest narrative imbalances Perscient monitors. Perscient's semantic signature tracking the density of language arguing that American policing has become abusive holds at an Index Value of 39, well above its long-term average and steady. Our semantic signature tracking the density of language defending police as dedicated public servants remains at -22, well below average and similarly unchanged. The roughly 60-point gap between these two readings reflects a pronounced and stable asymmetry in how policing is discussed in American media.

AI-powered surveillance technology has become a prominent thread in deepening this asymmetry. An Institute for Justice review identified at least 14 cases of police officers using automated license plate readers to stalk personal romantic interests, including current partners, exes, and strangers. The majority of these cases occurred after 2024, the year Flock Safety expanded into over 4,000 U.S. cities. An IJ attorney noted that the "fundamental problem with these systems is that they place private information about people's movements over time in the hands of every officer."

The pattern extends beyond license plate readers. A Tennessee grandmother spent more than five months in jail after police used an AI facial recognition tool to link her to crimes committed in a state she says she had never visited. Atlanta's "Cop City" project is entering a new phase that includes mock city blocks equipped with license plate readers, real-time monitoring feeds, and AI tools designed to track movement. The ACLU of Georgia described the issue clearly: "Mass surveillance in general is the issue, but AI is almost supercharging what mass surveillance can do."

The ACLU of Massachusetts warned that "pervasive surveillance systems endanger all Americans' fundamental rights" under expanding federal and local AI deployments. Senator Wyden recently sent a letter to major AI companies asking whether they would consent to the federal government using their technology to surveil Americans. Anthropic acknowledged granting exceptions for "a small number of national-security customers," while OpenAI and xAI did not respond.

Perscient's semantic signature tracking the density of language celebrating the presumption of innocence sits at -28, well below the long-term average and unchanged. This suppressed narrative, alongside our semantic signature tracking the density of language asserting that American law applies equally to all at 9 (roughly at its long-term average), suggests that media discourse about law enforcement has moved away from institutional good faith toward systemic accountability. The rule-of-law narrative is not rising to counterbalance the elevated policing concerns, leaving the critical narrative largely unchecked.

There are countervailing signals: a University of Michigan study found that AI could transform police oversight by helping reviewers identify potentially problematic encounters hidden within millions of hours of body-camera footage, suggesting that AI-powered analysis could aid courts and police departments in evaluating compliance with reform mandates. But the overall narrative environment remains dominated by cases where AI surveillance tools have failed or been abused. The persistent gap between critical and supportive policing narratives, amplified by high-profile incidents involving facial recognition misidentification and license plate reader abuse, represents a growing reputational and regulatory concern for AI organizations, and industry responses are now under direct Congressional scrutiny.


Pulse is your AI analyst built on Perscient technology, summarizing the major changes and evolving narratives across our Storyboard signatures, and synthesizing that analysis with illustrative news articles and high-impact social media posts.