Judgment After Visibility: Creative – and Countercultural – Leadership in the Platform Era
Across four previous articles (summarized here: Slocum, 2026), I have argued that hyperconnected, data-driven, and algorithmically governed platforms are not merely communication channels but evaluative infrastructures. They shape what is seen, rewarded, and taken as evidence of leadership competence. Because these environments privilege immediacy, legibility, and emotional coherence, they subtly favor responsiveness over deliberation and articulation over accountability. My argument is that, over time, and as a result, leadership risks being redefined less as a craft exercised under uncertainty than as performance optimized for visibility.
This article examines the displacement of human judgment in the platform era. By “judgment,” I mean the seasoned capacity to act wisely and responsibly in volatile situations where certainty is absent, information is incomplete, and no established rulebook provides a clear path forward. Judgment draws on an ingrained “feel for the game,” developed through sustained personal and professional immersion, that enables leaders to improvise responsibly when signals are noisy, partial, or contradictory. Rather than the application of a single or superior logic or calculation, judgment is the reflexive bridge across the gaps where logic, data, and precedent inevitably falter.
Such a working definition exposes a persistent category error in much of contemporary (especially popular) leadership discourse, which treats the indeterminate world of human affairs as if it were a solvable optimization problem. Platforms and AI systems excel at identifying patterns, making predictions, and managing calculable variation, but judgment becomes decisive precisely where probability distributions collapse and responsibility cannot be straightforwardly delegated (Agrawal, Gans, & Goldfarb, 2018). The paradox is that judgment is increasingly required by organizational complexity even as the conditions for recognizing, exercising, and cultivating it are steadily eroded.
It is here, as we encounter a central paradox of the platform and AI era, that we can also appreciate how judgment should be understood as being at the heart of creative leadership. My contention is that “creative” leadership today refers not to novelty, self-expressiveness, or inimitability, but, instead, to the disciplined, heterodox capacity to depart from dominant evaluative and performative logics; that is, to see and act where prevailing systems of attention, measurement, and reward cannot. In this way, I want to claim that creative leadership is inherently “countercultural” – not in posture or personality, but in a refusal to optimize for visibility or adhere to prevailing cultures, logics, or systems at the cost of minimizing substantive impact.
I. The Quiet Displacement of Judgment
Judgment has not disappeared from leadership practice, of course, but it has been quietly displaced from leadership language. Popular leadership discourse today is confident in its vocabulary – think: purpose, courage, empathy, and authenticity – yet strikingly thin in its account of what leaders actually confront when information is incomplete, incentives misalign, and consequences unfold unevenly over time.
My claim here is that judgment is among the key words, concepts, and capacities fading from mainstream leadership talk and practice in the platform era. The displacement matters not (only) because judgment is a kind of moral aspiration, but because without it, we risk losing a shared way of naming the act of differentiating primary from secondary considerations, choosing among imperfect options, and owning responsibility for outcomes that cannot be fully foreseen.
Historically, one could argue that judgment has long stood at the center of leadership, governance, and professional authority. Aristotle’s account of phronēsis located practical wisdom precisely in deliberation where rules and certainty fall short, emphasizing context-sensitive action rather than the mechanical application of general principles (Aristotle, 1999).
Two millennia later, and employing modern economic terms, economist Frank Knight produced a classic clarification of why such situations matter, when he distinguished risk, where outcomes can be assigned probabilities, from uncertainty, where they cannot (Knight, 1921/2022). Set down more than a century ago, this distinction remains foundational today because it marks the boundary beyond which calculation ceases to guide action and judgment must step in.
Pioneering systems scientist Sir Geoffrey Vickers extended this insight by emphasizing how judgment is an “appreciative system,” one that does not merely transmit facts between individuals but emerges from a joint system of communications between senders and receivers that assigns meaning and value to them (Vickers, 1965/1995).
French sociologist Pierre Bourdieu added a further dimension to this already social understanding by locating judgment within what he called habitus: the historically formed dispositions that allow actors to “know how to go on” within a field without explicit rules (Bourdieu, 1977). For readers unfamiliar with Bourdieu, the importance of habitus lies in showing why judgment is neither purely cognitive nor fully conscious; instead, it is embodied history made operative in the present.
What is striking today is not that leaders no longer exercise judgment, but that leadership discourse and practice have more and more difficulty naming it. As a result, judgment is displaced by traits and behaviors that are easier to display and affirm publicly. Resilience, authenticity, and emotional intelligence matter, yet they largely describe dispositions rather than the act of differentiating primary from secondary considerations and choosing among imperfect options with real stakes. While teams still need leaders who can arbitrate trade-offs, allocate attention, and sequence action, today’s increasingly quantified evaluative language tends to point elsewhere.
This displacement reflects a broader shift in evaluative regimes. As research on metrics and rankings has shown, systems of measurement reshape what organizations notice and reward, often crowding out professional judgment in favor of what is countable, comparable, and quickly surfaced (Muller, 2018). Platforms intensify this tendency because they are evaluative environments by design. They make visibility easy, reaction measurable, and coherence legible, while rendering slow discernment and delayed situational understanding comparatively invisible.
Approached this way, the disappearance of judgment from leadership discourse is less a research or developmental trend and more a structural effect of platform-based evaluation. Judgment has not vanished because it is obsolete, in other words, but because the environments in which leadership is assessed have narrowed the space in which judgment can be recognized. Beyond contributing to thinner leadership discourse, this shift erodes the potential of leaders to address complexity responsibly.
II. Discernment: Differentiating Signal from Noise
If judgment is the capacity to act wisely and responsibly under uncertainty, discernment is the precondition that makes such action possible. Discernment involves differentiating the essential from the secondary, signal from noise, and the salient (“what matters”) from the merely visible. It therefore governs attention before it governs action. Without discernment, judgment collapses into unthinking repetition, reaction, or paralysis.
In complex organizational settings, leaders are constantly confronted with more information than they can process. Organizational behavior scholar William Ocasio recognized a central leadership implication of discernment three decades ago: firm behavior follows from how organizations channel and distribute attention among decision-makers, which means leadership is partly the governance of salience itself (Ocasio, 1997). In platform environments, where attention is continuously captured, redirected, and monetized in real time, discernment becomes harder even as its long-term stakes grow.
Nobel Laureate Daniel Kahneman’s later work on noise deepens this diagnosis. While his more familiar writings about bias concern systematic directional error, noise refers to unwanted variability in judgment where similar cases receive dissimilar treatment. Consider the patient who presents the same symptoms to three separate doctors and receives three different diagnoses (Kahneman, Sibony, & Sunstein, 2021).
The Wells Fargo “unauthorized accounts” scandal illustrates how noise, in the form of internal metrics and incentives, can operate at an organizational level. In its September 8, 2016 Consent Order, the Consumer Financial Protection Bureau determined that the bank’s employees had opened more than two million unauthorized checking, deposit, and credit card accounts without consumers’ knowledge or consent, within a broader cross-selling regime whose practices distorted behavior (Consumer Financial Protection Bureau, 2016).
The failure emerged because while senior leaders consistently communicated strong values around customer focus and ethics, they failed to understand that the performance metrics and incentive systems guiding their employees were encouraging misconduct at scale.
In platform settings saturated with dashboards, alerts, metrics, and algorithmic amplification, leaders face both a greater surplus of information and much more unstable interpretive conditions. Noise proliferates as judgments are made under time pressure, fragmented attention, and shifting frames. Discernment therefore becomes harder precisely because the environment generates the illusion of clarity while multiplying inconsistency.
This is where discernment differs from analytical intelligence or specific forms of logic. Instead of merely being a cognitive filter applied to inputs, discernment is a practiced capacity shaped by experience to recognize patterns, anomalies, and significance. Discernment is exercised over time, refined through exposure to consequences, and calibrated through feedback and learning that are often delayed.
Platform environments, however, typically reward immediate coherence and responsiveness, which can mask noise as signal and penalize hesitation as weakness. For that reason, the very conditions that make discernment necessary also make it harder to sustain.
III. Tacit Knowledge and the Strained Dialogue with the Explicit
Yet discernment is not only a cognitive filter; it is also an embodied competence rooted in experience and, crucially, in tacit knowledge. British-Hungarian philosopher Michael Polanyi’s trenchant claim, that “we can know more than we can tell,” points to a form of knowing that is difficult to articulate but central to skilled action (Polanyi, 1966/2009). Tacit knowledge resides in pattern recognition, contextual sensitivity, and embodied familiarity with a field.
A generation later, British education researcher Michael Eraut likewise showed how much professional competence is developed through informal and often invisible workplace learning, with tacit knowledge accumulating through observation, participation, and situated feedback rather than through formal instruction alone (Eraut, 2000). Their respective conclusions made clear why judgment may not be fully expressible at the speed platforms expect – or provable in the moment.
Organizational knowledge research reinforces this point. The late Hitotsubashi ICS professor Ikujiro Nonaka describes organizational knowledge creation as a “continual dialogue between explicit and tacit knowledge” (Nonaka, 1994, p. 15), thereby providing a useful reminder that what teams can write down is only part of what they know.
British sociologist Harry Collins further differentiates forms of tacit knowledge, showing why some aspects of skill and judgment resist conversion into explicit rules (Collins, 2010). Through this lens, judgment emerges from the ongoing interplay between articulated analysis and unarticulated experience. This dialogue allows teams to test formal models against lived reality, and to revise their understanding when the two diverge.
Platform environments, however, strain this dialogue. Because tacit knowledge is slower to surface, harder to justify in the moment, and often only validated retrospectively, it is systematically disadvantaged in settings that privilege speed, fluency, and immediate legibility. The push toward visibility compresses time for reflection and narrows the space in which experiential cues can be voiced without being dismissed as subjective or anecdotal.
For leadership, the implication of this varied research is unmistakable: judgment depends on tacit knowing, and today’s platform environments systematically devalue tacit knowing because it is slower to surface, harder to measure, and often only visible after consequences unfold.
The 2018 Boeing 737 MAX crisis illustrates how catastrophic such distortions can become when institutional judgment is compressed by competitive, organizational, and reputational pressures. Lion Air Flight 610 crashed on October 29, 2018, in the Java Sea shortly after takeoff from Soekarno-Hatta International Airport, Jakarta, Indonesia (NTSB, n.d.; KNKT, 2019). Early leadership decisions had involved complex trade-offs among safety, speed, cost, and the competitive pressures surrounding the aircraft. Internally, at Boeing, these choices were framed, in compressed decision cycles, as technically justified and strategically necessary. Externally, company leadership communication to airline clients and the public emphasized confidence and reassurance.
Only after catastrophic consequences unfolded did the quality of those decisions become visible. The failure was not a lack of data or intelligence, but a breakdown in the dialogue between explicit models and tacit knowing under institutional pressure for speed, confidence, and continuity.
Viewed in this way, platforms do not eliminate tacit knowledge, but they do contract the space and time for vital dialogue and, with it, the conditions for sound judgment. Indeed, while judgment often involves holding competing interpretations in tension and resisting premature closure, algorithmic systems favor clarity, strong signals, and repeatable positions. Generative AI extends this logic by producing confident outputs that appear to settle uncertainty or ambivalence and, in the process, increasing the risk that teams converge too quickly around a plausible narrative and mistake fluency for reliability (Rosani, Farri, & Renecle, 2024).
IV. Visibility, AI, and the Compression of Judgment
While platforms displace judgment by privileging what can be made visible quickly, generative AI intensifies this displacement by altering how thinking itself is distributed. The danger emerging from AI’s accelerated decision-making is that it subtly reconfigures the relationship between speed, confidence, and responsibility. Where platforms reward responsiveness, AI supplies (seeming) coherence on demand, offering well-formed outputs that appear to resolve uncertainty even when the underlying situation remains indeterminate.
Kahneman’s classic distinction between fast, reactive System 1 thinking and slow, deliberative System 2 thinking remains useful here, but it may no longer be sufficient (Kahneman, 2011). As Wharton School researchers Steven Shaw and Gideon Nave argue, contemporary decision-making increasingly involves a third cognitive locus: artificial cognition that operates outside the human mind yet participates directly in reasoning (Shaw & Nave, 2026). This “System 3” does not merely assist human thinking; it can pre-empt it, suppress it, or substitute for it altogether. In doing so, AI use alters not only the outcomes of thinking, but “the shape of human reasoning” and, therefore, the internal human experience of judgment itself.
Such a proposition matters for leadership because judgment depends on the disciplined interplay between intuition, deliberation, and experience over time. System 3 can short-circuit this interplay by offering answers that are fast, fluent, and apparently authoritative. Under conditions of time pressure, complexity, or cognitive fatigue, leaders may adopt AI-generated outputs with minimal scrutiny, a phenomenon that Shaw and Nave describe as “cognitive surrender.” As a result, the concern is ultimately less about reliance on AI as a tool and more about the partial abdication of human responsibility for interpretation and consequence that the tool allows.
That concern is compounded by platform environments that already equate speed with competence. When AI-generated coherence aligns with platform incentives for immediacy, leaders face a double compression: less time to reflect and fewer cues signaling when reflection is necessary. Judgment, which often requires holding ambiguity open rather than closing it prematurely, becomes harder to recognize and, once recognized, harder to justify. The risk is not that AI makes leaders less intelligent, but that it encourages premature cognitive closure in situations where responsibility warrants patience, reflection, and continuing encounters with uncertainty.
Put more plainly, we should view judgment not as a static trait but as a developmental achievement. It emerges through repeated engagement with uncertainty and repeated exposure to consequences, and through cycles of interpretation, error, and correction. This is why judgment is inseparable from what organizational learning pioneer Donald Schön termed “knowing-in-action”: the capacity to respond intelligently in the midst of practice without relying on explicit rules alone (Schön, 1983). Since such knowing resides in the dynamic interaction between tacit understanding and explicit reasoning, it is most under pressure from the speed and compression that occasion AI use and mark platform logics.
V. Recovering Judgment Under Conditions of Visibility, Speed, and Compression
Recovering judgment in these environments, where it has become harder both to exercise and to recognize, requires more than exhortations to “slow down” or “step back.” It requires rebuilding organizational conditions that protect discernment, sustain dialogue between tacit and explicit knowledge, and resist the automatic privileging of speed and visibility. The challenge is not to reject or minimize interactions with AI or platforms, but to reassert human responsibility within hybrid cognitive systems.
A first move is to distinguish domains where speed is appropriate from those where it may present threats. Not all decisions benefit from deliberate delay, but many leadership judgments do, particularly those involving ethical trade-offs, long-term risk distribution, or irreversible consequences. Aligning evaluation systems with these distinctions is crucial. When leaders are rewarded primarily for prompt and precise responsiveness, judgment-oriented behaviors such as delaying action, seeking dissent, or reframing the problem are easily misread as weakness rather than competence.
Re-legitimating rich and varied experiences as developmental assets, especially experiences that include failure, reflection, and repair, is one way to support these behaviors. Because tacit knowledge is often formed in messy contexts where rules do not neatly apply, as we’ve seen, organizations that remove friction from work also risk removing some of the very conditions through which employees learn greater situational discernment. While the “friction-maxxing” currently in vogue in some organizations is not a universal answer, recognizing the potential advantages of introducing some friction – say, to slow the pace of thinking and enable the embrace of constructive complications – can be valuable.
A second move for leaders is to cultivate disciplined practices of self-correction that are social, not merely personal. Judgment improves through iterative calibration, such as noticing through dialogue where one’s interpretation is wrong, updating one’s salience map, and testing revised assumptions. By creating space for critical thinking, for the articulation of tacit cues, and for the contestation of premature coherence, leaders can help ongoing dialogue to become a practical infrastructure for discernment. In other words, beyond serving as the basis of explicit communication, substantive dialogue can become a method for making tacit knowledge shareable enough to be challenged without pretending it can be fully converted into explicit rules.
Besides interpersonal dialogue, a third, closely related move involves ensuring that leaders and others keep tacit knowledge in active dialogue with AI-generated outputs. This means treating System 3 not as an answer engine but as one of many provisional inputs whose value depends on human interpretation. Teams can institutionalize this stance by requiring humans to name the tacit cues that AI cannot “know” in context – like reputational stakes, informal norms, regulatory sensitivities, the politics of timing, the lived history of a team, and the risk distribution of a decision.
When teams learn to ask, explicitly, “What do we know here that we cannot fully explain?”, they recover some of the tacit dimension of the situational background as legitimate input to judgment. Doing so counteracts the cognitive surrender described by Shaw and Nave by reactivating reflective human judgment.
Finally, recovering judgment requires revaluing learning from near-misses and failures. The groundbreaking research of Harvard Business School’s Amy Edmondson on failure, learning, and psychological safety is useful here. She argues that organizations do not learn automatically from failure, but through deliberate practices that surface and interpret it through diagnosis, classification, and non-punitive inquiry. Leaders and organizations therefore need to “catch, correct, and learn” what is not working and what can be changed before others do and before failure scales (Edmondson, 2011).
In platform settings, leaders should therefore slow feedback loops at key moments so that teams can pause to convert near-misses and small breakdowns into shared learning and re-direction rather than into defensiveness or silence. Over time, dialogue and feedback are precisely what sustain the long-term development of judgment, which depends on calibrated self-correction and ongoing open social interaction rather than flawless performance and polished narratives.
Taken together, these moves provide the makings of a strong creative and countercultural leadership capacity for judgment. They do so by constituting a disciplined refusal to collapse uncertainty too quickly – even when platforms and AI make doing so easy. Ultimately, that discipline depends on the continuing cultivation of the discernment and tacit knowledge that allow leaders to differentiate what matters from what merely moves, acknowledge knowing beyond rules and the visible, and then to decide and act upon that differentiation.
References
Ajay Agrawal, Joshua Gans, and Avi Goldfarb (2018) Prediction Machines: The Simple Economics of Artificial Intelligence, Harvard Business Review Press.
Aristotle (1999) Nicomachean Ethics, Terence Irwin, trans., 2nd ed., Hackett Publishing (Original work ca. 350 BCE); Internet Archive.
Pierre Bourdieu (1977) Outline of a Theory of Practice, Richard Nice, trans., Cambridge Studies in Social and Cultural Anthropology, Number 16, Cambridge University Press.
Harry Collins (2010) Tacit and Explicit Knowledge, University of Chicago Press.
Consumer Financial Protection Bureau (2016, September 8) Consent Order: In the Matter of Wells Fargo Bank, N.A. (2016-CFPB-0015); https://files.consumerfinance.gov/f/documents/092016_cfpb_WFBconsentorder.pdf
Amy C. Edmondson (2011, April) “Strategies for Learning from Failure,” Harvard Business Review.
Michael Eraut (2000) “Non-formal Learning and Tacit Knowledge in Professional Work,” British Journal of Educational Psychology, 70(1), 113–136; https://doi.org/10.1348/000709900158001
Daniel Kahneman (2011) Thinking, Fast and Slow, Farrar, Straus and Giroux.
Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein (2021) Noise: A Flaw in Human Judgment, Little, Brown Spark.
Frank H. Knight (1921/2022) Risk, Uncertainty, and Profit, Houghton Mifflin/bnpublishing.
KNKT (Indonesian National Transportation Safety Committee) (2019) Final aircraft accident investigation report: Lion Air flight 610, Boeing 737-8 (MAX), PK-LQP (KNKT.18.10.35.04); https://www.aaiu.ie/sites/default/files/FRA/2018%20-%20035%20-%20PK-LQP%20Final%20Report.pdf
Jerry Z. Muller (2018) The Tyranny of Metrics, Princeton University Press.
Ikujiro Nonaka (1994) “A Dynamic Theory of Organizational Knowledge Creation,” Organization Science, 5(1), 14–37; https://doi.org/10.1287/orsc.5.1.14
NTSB (National Transportation Safety Board) (n.d.) Investigation: Lion Air flight 610 (DCA19RA017); https://www.ntsb.gov/investigations/Pages/DCA19RA017-DCA19RA101.aspx
William Ocasio (1997) “Towards an Attention-based View of the Firm,” Strategic Management Journal, 18(S1), 187–206; https://doi.org/10.1002/(SICI)1097-0266(199707)18:1+<187::AID-SMJ936>3.0.CO;2-K
Michael Polanyi (1966/2009) The Tacit Dimension, Doubleday/University of Chicago Press.
Gabriele Rosani, Elisa Farri, and Michelle Renecle (2024, November 20) “To Mitigate Gen AI’s Risks, Draw on Your Team’s Collective Judgment,” Harvard Business Review.
Donald A. Schön (1983) The Reflective Practitioner: How Professionals Think in Action, Basic Books.
Steven D. Shaw and Gideon Nave (2026) “Thinking – Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” SSRN Working Paper; https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
David Slocum (2026, February 19) “From Judgment to Visibility: How Platforms Are Quietly Redefining What Leadership Means,” Crafting Leadership, Substack; https://www.craftingleadership.com/p/from-judgment-to-visibility-how-platforms
Geoffrey Vickers (1965/1995) The Art of Judgment: A Study of Policy Making, Sage Publications.