
It’s Not Optimal – Public Books


Optimization is venerated today and broadly put into practice. Yet its conceptual origins come from an unlikely source, and its value is not at all self-evident.

“Optimize” first appeared in an 1817 issue of a literary magazine called The Examiner. The magazine’s cofounder, Leigh Hunt, was reviewing the debut collection of an aspiring writer, John Keats. Titled Poems, Keats’s work had been widely panned by the literary world as undisciplined and indulgent. Hunt, however, was a fan. “Here is a young poet,” Hunt extolled, “giving himself up to his own impressions, revelling in real poetry for its own sake.”

Poems deviated from the style of the time, which Hunt found sterile and rigid. Here, Hunt opined that “Poetry, like Plenty, should be represented with a cornucopia, but it should be a real one; not swelled out and insidiously optimized at the top, like Mr. Southey’s stale strawberry baskets, but fine and full to the depth, like a heap from the vintage.”

Poetry had lost its heart, Hunt argued, optimizing form at the cost of real feeling. In contrast, Keats’s Poems overflowed.


Distanced from its roots in literary criticism, optimization has become a core principle of mathematics and programming, a market good, and a normative commitment toward productivity gains. Defined as the pursuit of the best path to a target given resources and constraints, optimization is characterized by precision and efficiency; by maximized results without surplus or waste. Entire markets have built up around it. Startups, university programs, business consultancies, and content creators all buy and sell optimization, trading in promises of organizational attainment, continual growth, and the polish of personal excellence.

Of course, optimization’s promises also double as pressures in a world where time is short, demands are high, and successes are measured and weighed. Few are immune, including us—a pair of overscheduled academics passing drafts over email for efficiency’s sake. We weren’t always like this. At first, we shared slow meals and rich discussions, enraptured by the ideas behind a selection of newly published books we would read together, all about artificial intelligence (AI) and its societal effects. As sociologists who specialize in the area, we were eager to sift through the research, sort out our thoughts, and wade in the intellectual waters. This was especially the case as generative AI took hold, transforming all sectors of society and exploding our niche academic interest into a global conversation.

Yet as classes ramped up and deadlines approached, our meetings grew fewer and shorter. Writing squeezed into the cracks between competing professional demands. Track Changes and comment bubbles became our primary medium, streamlining discussion and debate. It was a practical workflow. Smart and succinct. Anyway, with AI’s rapid development and integration, the romances of intellect—meandering thoughts, playful premises, idle discussions, and wild ideas—were beginning to seem old-fashioned. What was the point of such human excesses against the rise of precision machines?


It turns out, human excesses may be poised for revival. As society enters a transitional moment, one defined and disrupted by an apparent age of AI, there has been a collective pause to rethink social norms, practices, and values. The cultural sociologist Ann Swidler refers to such periods as “unsettled” times, in which background conditions become observable and vivid, and established standards are subject to change.

The unsettled moment wrought by generative AI has created hype and panic, to be sure, but also studied statements about the human condition. We place four such statements in conversation: Allison Pugh’s The Last Human Job; Leslie Valiant’s The Importance of Being Educable; Ethan Mollick’s Co-Intelligence; and Verity Harding’s AI Needs You. Respectively, these works address interpersonal connection, teaching and learning, productivity practices, and policy spheres. Though disparate in their subjects, methods, ideologies, and purposes, these works share a surprising throughline: there is virtue in the good enough, value in frivolities, and diminishing returns on perfection. This is an ironic twist in AI’s long historical arc, with the spirit of optimization waning just as conditions ripen for its crest.


Based on a multiyear study of US workers and workplaces, Pugh’s The Last Human Job is set against a scene of quantification, standardization, and task automation at scale. At stake in Pugh’s analysis is the future of connective labor, or the skilled practice of perceiving, acknowledging, and reflecting back others’ thoughts and feelings—a blend of emotional labor and psycho-social recognition, as Pugh describes it.

Connective labor is a job requirement, skill set, and professional proficiency. It is also, Pugh tells us, the bedrock of social cohesion in a world teeming with work-related interactions. Be they between managers and staff, teachers and students, cashiers and customers, or doctors and patients; be these interactions long or fleeting, shallow or deep; be they consequential or mundane, our day-to-day rhythms follow a tune of transactional exchange.

These exchanges matter in the making of a life and of a society, and they are under threat by the cultural juggernaut of industrial logics.

Connective labor spans occupational sectors, as demonstrated by the variety of jobs represented in Pugh’s study, such as physician, driver, teacher, manager, and clergy. The workers she interviews all exercise emotional attunement, weaving a connective mesh that serves, but also exceeds, instrumental ends. Connective labor promotes student learning and patient compliance, for example, but it also does more than this. Connective labor brings dignity, purpose, and moral identity to workers themselves, while warming the tenor and tone of collective social life.

For Pugh, it is these intangible elements that are at risk of erosion with AI and automation. After all, connective labor is a path to professional goals, but certainly not the optimal one. It’s effortful, messy, and notoriously inaccurate. Connective labor is vast and vague, while data-driven machines offer uniform processes, precision outputs, and predictable results.

Yet Pugh warns against the trend of machinic displacement. Excesses are the point of connective labor, and inaccuracies inconsequential. More a sense than a science, people often misread one another’s thoughts and feelings, making “recognition” a woolly practice at best. This matters little for social cohesion. Incorrect guesstimates and the process of making them yield real payoffs for both the seer and the (mis)seen.

Connective labor is not just a task, Pugh says, nor is it reducible to measurable ends. It is an inalienable part of humanity itself, without which the social fabric is left threadbare and thin.

Pugh worries about AI. Valiant and Mollick, less so. Beginning with the premise of human adaptability, both authors—Valiant in The Importance of Being Educable, Mollick in Co-Intelligence—offer qualified enthusiasm and earnest instruction. Addressing the topics of learning and productivity, respectively, Valiant and Mollick seem prepared to lay pathways toward optimized ends. To some extent, they do lay those paths, but with significant bumps, curves, and diversions along the way. As each of their works proceeds, excesses emerge in the authors’ theories and accounts, while precision and efficiency ebb and fade.

For Valiant, imprecision is embedded in his theory of learning, which rejects the essentialism of “intelligence” in favor of a more malleable process: “educability.” The Importance of Being Educable is driven by this core concept and underpinned by three interrelated claims: 1) educability should displace intelligence as the framework for human cognition; 2) educability is a computational process that distinguishes humans from other animals; and 3) humans and machines are educable in similar ways, elevating the importance of both human curricula and machine learning training.


Valiant defines educability as the capacity to learn and grow via direct experience, external instruction, and the application of these sources to novel situations. It is the ability to apply external teachings in the absence of personal experience, facilitated by symbolic meaning systems, that distinguishes humans from other species. Valiant contends that this human learning process is paralleled in machines, whereby data are both acquired and input, instructions are programmable, and information is mobilized to navigate new environments. This link between human and machine learning motivates Valiant’s call for joint attention to human and machine pedagogies, advocating for serious attention to instructional materials and methods. None of this, however, is premised on strict accuracy.

Educable learning systems rely on approximations, or what Valiant calls probably approximately correct (PAC) learning. This assumes that both people and machines get the gist of their lessons and apply those lessons in mostly relevant and correct-ish ways. Approximation itself need not conflict with optimization (in fact, optimization theory assumes and depends on approximation to manage inevitable uncertainties). What diverges here is Valiant’s celebratory stance. The allowance for error in his theory of educability is not a concession, but an irreplicable asset that sparks and enables expansive adaptability amid ever-shifting parameters. It is an elemental piece of the highest cognition—essential for human, machine, and human-machine evolutions.
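For readers curious about what “probably approximately correct” means formally: a PAC learner, given enough random examples, produces a rule whose error is below a small tolerance, with high probability. The following toy sketch (illustrative only; the concept, numbers, and function names are ours, not Valiant’s) simulates a learner guessing a threshold rule from random examples and measures how often its guess is approximately correct.

```python
import random

def pac_demo(n_samples=200, trials=1000, eps=0.1):
    """Toy PAC illustration: learn the concept c(x) = (x >= 0.5) on [0, 1].

    For each trial, draw n_samples uniform examples, let the learner guess
    the threshold as the smallest positive example it saw, and check whether
    the guess's error (the misclassified interval [0.5, guess)) is below eps.
    Returns the fraction of trials in which the learner was
    "approximately correct" -- which approaches 1 as n_samples grows.
    """
    successes = 0
    for _ in range(trials):
        data = [random.random() for _ in range(n_samples)]
        positives = [x for x in data if x >= 0.5]
        guess = min(positives) if positives else 1.0
        error = guess - 0.5  # probability mass the learned rule misclassifies
        if error < eps:
            successes += 1
    return successes / trials
```

With 200 samples, the chance that no example lands within 0.1 of the true threshold is vanishingly small, so nearly every trial succeeds: accuracy is only approximate, but the approximation is probably good, which is the whole point of the framework.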

Equally interested in human-machine evolutions, Ethan Mollick’s Co-Intelligence is a slick guide to productive collaboration. He proffers four rules for working with AI, beginning with the imperative to include AI in all tasks (“always invite AI to the table”) and ending with an assertion of perpetual technological advance (“assume this is the worst AI you will ever use”). In between are instructions to “be the human in the loop” and to “treat AI like a person (but tell it what kind of person it is).”

The book’s lithe writing, productivity focus, and the business background of its author all lend themselves to the logics of optimization—leveraging the power of AI to accelerate achievement. The work drifts from this, however, in Mollick’s personal anecdotes and research-driven accounts. AI often misses the mark, requires significant human input, and can add time to a project by creating dialogic exchanges that slow decisive acts. Such inefficiencies, inaccuracies, and compulsory vigilance are part of the human-AI landscape that Mollick describes, painting a picture that reads more capacious than lean.

Mollick might be rightly accused of wearing rose-colored glasses, focusing on production while glossing over social costs. The book barely touches on AI’s systemic disparities, for example, nor does it address the environmental risks of ever-advancing AI models. Yet it does accept, and at times delights in, the nebulous endpoints, clumsy progressions, and overarching uncertainties that eventuate in human-AI pairings. Such qualities stray from standard management mantras and their formulas for success, shrinking optimization as a mandate and drive.

At a societal level, an emphasis on human values over machine precision applies beyond individual undertakings, extending to AI governance. In crafting public policy, exact science often proves less valuable than achieving mutual understanding and collective buy-in. This is the crux of Verity Harding’s AI Needs You, which extracts lessons for the future of AI from past technological transformations, including the US Space Program and moon landing; the UK’s Warnock Report on in vitro fertilization (IVF); and the establishment of a global internet.

Throughout, Harding documents the tedious, painstaking, compromise-laden efforts to implement new technological systems into social and institutional infrastructures, and to govern those systems in ways that respect—even if imperfectly—a plurality of social, political, and ideological positions. Optimization falls away here, supplanted by an arduous slog that attends to emotions and relationships over accuracy and fact. Harding’s history of IVF is especially illustrative in this regard.

IVF, and embryonic research more generally, were fraught topics in the 1970s and 1980s. Scientists were developing new capabilities for reproductive intervention, applying medical advances to the genesis of life. Lines blurred between science and the sacred, stirring cultural debate about moral obligations, allowances, limits, and boundaries.

Sensitive to these debates, the UK government established the Warnock Committee in 1982 to formulate IVF policy. Chaired by the moral philosopher Mary Warnock and populated by health professionals, academics, social workers, and civil servants, this committee proposed a controversial rule that was socially effective but scientifically imprecise (some would say, unsound).

The 14-day rule at the center of IVF policy established a hard deadline after which embryonic research could no longer be performed, creating a distinction between cell matter and human life. Fourteen days was based on the point at which the “primitive streak” forms, constituting the individuation of an embryo and coinciding with the period from fertilization to final implantation. The problem, biologists argued, is that 14 days is an arbitrary indicator. Embryos may form a primitive streak earlier or later, just as implantation may finish on day 12 or 15. Warnock and the committee were aware of these facts yet proceeded with the temporal marker. What mattered was the marker itself, a policy setting that preserved the hallowed nature of life vis-à-vis scientific endeavors. What mattered was a populace at ease.

Echoing Pugh’s theory of connective labor, Harding makes the case that publics need to feel recognized and understood in any legislative measure. The Warnock committee thus chose to time delimit embryonic research, acknowledging the social and spiritual stakes. Neither exact nor efficiently devised, this IVF policy endures today, facilitating both family planning and scientific developments.


In 1817, Leigh Hunt critiqued a literary fashion that prioritized form over feeling. Though aimed at that which was optimized, his message readily applies to the notion of optimization itself. Hunt was bored by neat prose and obedient structures, preferring the luxuriant qualities of Keats.

Two hundred years later and fresh off the press, four books about a new age of AI tell stories of sluggish processes, ambiguous outcomes, emotionally charged issues, and generous margins for error. In telling these stories, Pugh, Valiant, Mollick, and Harding decenter optimization as an orienting force.

Contemporary optimization regimes use precision indicators to reach prescribed ends, streamlined through platforms and models. Hunt would be unmoved, and Keats far out of place. What if these literary figures, going against the 19th-century grain, were on to something that is again relevant today? They very well might have been.

This article was commissioned by Mona Sloane.


