Jay Roldan, PhD

Robotics and thoughtful system design

I’m a roboticist and systems thinker focused on designing robots that behave safely and predictably in the real world. Over more than a decade developing surgical robotic platforms, my work has spanned system architecture, motion control, task planning, and safety-critical software — translating advanced research into dependable products used in high-stakes environments.

  • When Should a Surgical Robot Say “No”?

    As surgical robots move closer to clinical autonomy, one essential capability remains underdeveloped: the ability for a system to understand its own limits and to act on that understanding.

    In the operating room, a millimeter matters. Small deviations can accumulate into meaningful risk, and uncertainty is not an edge case; it is the norm. In this context, autonomy without self-awareness is brittle. What surgical robotics increasingly demands is not just better execution, but better judgment.

    This is where capability-aware robotics becomes foundational.

    Knowing What You Can and Cannot Do

    Capability-aware robotics refers to systems that can model, assess, and reason about what they are able to do safely under current conditions. This goes beyond basic constraint checking or fault detection. A capability-aware system asks a more fundamental question before acting:

    Given my current state, environment, and task demands, should I proceed at all?

    Answering that question requires multiple layers of awareness: physical constraints such as joint limits and workspace reach; functional limits tied to accuracy, force, or sensing; contextual understanding of task risk; and collaborative awareness of when control should shift back to a human operator.
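    One way to picture those layers is as a structured self-assessment the system runs before acting. The sketch below is purely illustrative — the class name, fields, and gating logic are hypothetical, not drawn from any particular platform — but it shows how a capability check differs from a simple fault flag: every layer must clear before autonomous action, and the collaborative layer governs the fallback rather than the go-ahead.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CapabilityReport:
        """Hypothetical per-action self-assessment, one field per layer."""
        within_workspace: bool    # physical: joint limits, reachable workspace
        accuracy_ok: bool         # functional: expected error vs. task tolerance
        risk_acceptable: bool     # contextual: task-level risk assessment
        operator_available: bool  # collaborative: a human fallback path exists

        def should_proceed(self) -> bool:
            # Autonomous action requires every capability layer to clear.
            # operator_available does not grant permission; it only
            # determines whether a safe handover is possible on refusal.
            return (self.within_workspace
                    and self.accuracy_ok
                    and self.risk_acceptable)

    # Example: accuracy and reach are fine, but task risk is too high,
    # so the system should refuse and hand control to the operator.
    report = CapabilityReport(True, True, False, True)
    ```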

    This reframes autonomy away from execution alone and toward judgment under uncertainty.

    A Longstanding Idea, Revisited

    The roots of capability awareness run deep in robotics. Long before surgical autonomy was even conceivable, roboticists were already grappling with limits.

    Manipulability measures, dexterity indices, and singularity analysis were early attempts to formalize what a robot could reliably do in a given configuration. These tools helped engineers avoid regions where control authority degraded or motion became ill-conditioned. The underlying principle was simple but profound: just because a path exists does not mean it should be used.
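    Yoshikawa’s manipulability measure is the classic instance of this idea: a scalar computed from the manipulator Jacobian that shrinks toward zero as the arm approaches a singular configuration. A minimal sketch for a planar two-link arm (an assumed toy model, not a surgical kinematic chain):

    ```python
    import numpy as np

    def yoshikawa_manipulability(J: np.ndarray) -> float:
        """Yoshikawa's measure w = sqrt(det(J @ J.T)).

        Approaches zero near singular configurations, where control
        authority in some task-space direction is lost.
        """
        return float(np.sqrt(np.linalg.det(J @ J.T)))

    def planar_2link_jacobian(q1: float, q2: float,
                              l1: float = 1.0, l2: float = 1.0) -> np.ndarray:
        """Jacobian of a planar 2-link arm (link lengths l1, l2)."""
        s1, c1 = np.sin(q1), np.cos(q1)
        s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
        return np.array([
            [-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12],
        ])

    # Elbow bent at 90 degrees: well-conditioned (w = l1 * l2 * |sin(q2)| = 1.0).
    w_bent = yoshikawa_manipulability(planar_2link_jacobian(0.3, np.pi / 2))

    # Arm nearly outstretched: near-singular, w collapses toward zero.
    w_straight = yoshikawa_manipulability(planar_2link_jacobian(0.3, 1e-3))
    ```

    A planner that refuses paths through low-`w` regions is making exactly the judgment described above: the path exists geometrically, but it should not be used.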

    What has changed is not the idea, but the context. Surgical robots operate in dynamic, partially observed, safety-critical environments where failure is not a simulation artifact but a clinical outcome. Capability awareness today must extend beyond geometry into real-time assessment of uncertainty, confidence, and risk.

    Task-Appropriate Autonomy

    A practical consequence of capability awareness is task-appropriate autonomy: assigning the right responsibilities to the right agent at the right time.

    Robots excel at precision, repeatability, and stability. Humans excel at contextual reasoning, ambiguity resolution, and ethical judgment. In surgery, this naturally suggests a hybrid approach: let robots handle repetitive, high-precision subtasks, while surgeons retain authority over decisions that require interpretation or adaptation.

    A capability-aware system makes this division explicit. Rather than assuming autonomy is always preferable, the robot continuously evaluates whether it is the appropriate agent to proceed. When confidence drops below a safe threshold, the system adapts by slowing down, requesting input, or handing control back to the surgeon.

    Importantly, this is not a failure mode. It is a designed behavior.
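    The graded response described above can be sketched as a simple arbitration policy. The thresholds and action names here are illustrative assumptions only — in a real system they would be derived per-task from validated risk analysis, and the confidence estimate itself would come from the capability-assessment layers:

    ```python
    from enum import Enum, auto

    class Action(Enum):
        PROCEED = auto()        # confidence high: continue autonomously
        SLOW_DOWN = auto()      # degrade gracefully, keep monitoring
        REQUEST_INPUT = auto()  # pause and ask the surgeon to confirm
        HAND_BACK = auto()      # return full control to the surgeon

    def arbitrate(confidence: float,
                  proceed_thresh: float = 0.9,
                  caution_thresh: float = 0.7,
                  input_thresh: float = 0.5) -> Action:
        """Map a scalar confidence estimate to a graded response.

        Thresholds are placeholders; a deployed system would set them
        per task from risk analysis, not hard-code them.
        """
        if confidence >= proceed_thresh:
            return Action.PROCEED
        if confidence >= caution_thresh:
            return Action.SLOW_DOWN
        if confidence >= input_thresh:
            return Action.REQUEST_INPUT
        return Action.HAND_BACK
    ```

    Note that every branch is a designed outcome: handing back control is a first-class action, not an exception path.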

    Where This Matters Most

    The value of capability awareness becomes clearest in edge cases—the situations that are hardest to script and most dangerous to ignore.

    Consider surgical registration. A robot may be able to compute a transformation from collected data, but that does not mean the data is sufficient or well-conditioned. A capability-aware system would recognize when point distribution, noise, or geometry make the result unreliable and would inform the user, rather than proceeding silently.
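    One concrete way to detect the degeneracy described above is to examine the spread of the collected points before computing the transformation: near-collinear or near-coplanar fiducials leave the rotation poorly constrained, and the smallest eigenvalue of the centered point covariance exposes this. The function below is a minimal sketch of that conditioning check (the threshold value is an assumption, not a clinical standard):

    ```python
    import numpy as np

    def registration_conditioning(points: np.ndarray, eps: float = 1e-3):
        """Flag point sets that are too degenerate for reliable rigid registration.

        A tiny smallest eigenvalue of the centered covariance means the
        points are nearly collinear or coplanar, so some component of the
        rotation is only weakly constrained by the data.
        """
        centered = points - points.mean(axis=0)
        cov = centered.T @ centered / len(points)
        eigvals = np.linalg.eigvalsh(cov)  # ascending order
        return bool(eigvals[0] > eps), eigvals

    # Nearly collinear fiducials: the check should refuse to proceed silently.
    collinear = np.array([[0, 0, 0], [1, 0.001, 0],
                          [2, 0, 0.001], [3, 0.002, 0]], dtype=float)
    ok_collinear, _ = registration_conditioning(collinear)

    # Well-spread (tetrahedral) fiducials: registration is well-conditioned.
    spread = np.array([[0, 0, 0], [1, 0, 0],
                       [0, 1, 0], [0, 0, 1]], dtype=float)
    ok_spread, _ = registration_conditioning(spread)
    ```

    A capability-aware registration pipeline would surface the failing check to the user — “collect more points away from this line” — instead of reporting a numerically valid but unreliable transform.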

    Similar patterns appear in tool tracking, force-sensitive manipulation, or vision-guided tasks under occlusion. In each case, the robot’s ability to say “I don’t have enough confidence to proceed safely” is as important as its ability to act when conditions are favorable.

    These moments are where trust is earned or lost.

    Industry Implications

    For medical device companies and technical leaders, capability awareness is more than an academic concept.

    Systems that can explain their limits are easier to validate, easier to regulate, and easier for clinicians to trust. Transparent self-assessment aligns naturally with regulatory expectations around risk management and explainability. From a product perspective, capability-aware autonomy scales better across procedures and patient variability than brittle, rule-heavy automation.

    Perhaps most importantly, it reframes autonomy as a partnership rather than a replacement. Surgeons are not asked to trust a black box, but to work with a system that understands when to step forward and when to step back.

    Looking Ahead

    Autonomy in surgery will not arrive all at once. It will emerge gradually, task by task, constrained by safety, regulation, and human acceptance. Capability-aware robotics provides a framework for navigating that path responsibly.

    The most trustworthy surgical robots of the future will not be those that attempt the most, but those that understand their limits the best.

    And sometimes, the safest action a robot can take is to say “no.”