Or something that goes against the general opinions of the community? Vibes are the only benchmark that counts after all.

I tend to go with the flow on most things, but here are a few takes of mine that I’d consider against the grain:

  • QwQ was think-slop and was never that good
  • Qwen3-32B is still SOTA for 32GB and under. I cannot get anything to reliably beat it despite shiny benchmarks
  • Deepseek is still open-weight SotA. I’ve really tried Kimi, GLM, and Qwen3’s larger variants but asking Deepseek still feels like asking the adult in the room. Caveat is GLM codes better
  • (proprietary bonus): Grok 4 handles news data better than GPT-5 or Gemini 2.5 and will always win if you ask it about something that happened that day.
  • SmokeyDope@lemmy.world (13 days ago):

    Thank you for the engaging discussion hendrik, it’s been really cool to bounce ideas back and forth like this. I wanted to give you a thoughtful reply and it got a bit long, so I have to split it up for comment limit reasons. (P1/2)

    Though in both the article you linked and in the associated video, they clearly state they haven’t achieved superposition yet. So […]

    This is correct. It’s not a fully functioning quantum computer in the operational sense; it’s a breakthrough in physical qubit fabrication and layout, and I should have been more precise. My intent wasn’t to claim it can run Shor’s algorithm, but to illustrate that we’ve made more progress on scaling than one might initially think. The significance isn’t that it can compute today, but that we’ve crossed a threshold in building physical hardware with that potential. The jump from 50-100-qubit devices to a 6,100-qubit fabric is a monumental engineering step: a proof of principle for scaling, which remains the primary obstacle to practical quantum computing.

    By the way, I think there is AI which doesn’t operate in a continuous space. It’s possible to have them operate in a discrete state-space. There are several approaches and papers out there.

    On the discrete versus continuous AI point, you’re right that many AI models, like Graph Neural Networks or certain reinforcement learning agents, operate over discrete graphs or action spaces. However, there’s a crucial distinction between the problem space an AI or computer explores and the physical substrate that does the exploring. Classical computers, at their core, process information through transistors that are definitively on or off: binary states. Even when a classical AI simulates continuous functions or explores continuous parameter spaces, it’s ultimately performing discrete math on binary states; the continuity is simulated through approximation, usually floating-point arithmetic.
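
    As a minimal illustration of that approximation (plain Python, nothing specific to any AI framework), the “continuous” values a classical machine works with are really a finite grid of representable floats:

    ```python
    import math

    # Doubles approximate the continuum with a finite grid of representable values.
    # math.ulp(x) is the gap between x and the next representable float above it.
    print(math.ulp(1.0))        # ~2.22e-16: nothing in between can be represented
    print(0.1 + 0.2 == 0.3)     # False: all three literals are rounded approximations
    print(f"{0.1:.20f}")        # 0.10000000000000000555...
    ```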

    A quantum system is fundamentally different. The qubit’s ability to exist in superposition isn’t a simulation of continuity. It’s a direct exploitation of a continuous physical phenomenon inherent to quantum mechanics. This matters because certain computational problems, particularly those involving optimization over continuous spaces or exploring vast solution landscapes, may be naturally suited to a substrate that is natively continuous rather than one that must discretize and approximate. It’s the difference between having to paint a curve using pixels versus drawing it with an actual continuous line.
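
    For concreteness, the standard textbook Bloch-sphere parametrization of a single qubit makes that native continuity explicit: the state is specified by two continuous angles, not by a finite menu of bit patterns.

    ```latex
    % Pure single-qubit state in the computational basis (Bloch-sphere form)
    |\psi\rangle = \cos\!\frac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\!\frac{\theta}{2}\,|1\rangle,
    \qquad \theta \in [0,\pi],\quad \varphi \in [0,2\pi)
    ```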

    This native continuity could be relevant for problems that require exploring high-dimensional continuous spaces or finding optimal paths across complex topological boundaries: precisely the kind of problem that might arise in navigating the topological landscape of an abstract cognitive activation atlas to reach highly ordered, algorithmically complex points of factual information, the kind that depend on intricate proofs and multi-step computational paths. The search for a mathematical proof or a novel scientific insight isn’t just a random walk through possibility space. It’s a navigation problem through a landscape where most paths lead nowhere, and the valid path requires traversing a precise sequence of logically connected steps.

    Uh, I think we’re confusing maths and physics here. First of all, the fact that we can make up algorithms which are undecidable… or Goedel’s incompleteness theorem tells us something about the theoretical concept of maths, not the world. In the real world there is no barber who shaves all people who don’t shave themselves (and he shaves himself). That’s a logic puzzle. We can formulate it and discuss it. But it’s not real. […]

    You raise a fair point about distinguishing abstract mathematics from physical reality. Many mathematical constructs, like Hilbert’s Hotel or the barber paradox, are purely conceptual games with no physical counterpart; they exist to explore the limits of abstract logic. But what makes Gödel’s and Turing’s work different is that they weren’t just playing with abstract paradoxes. They uncovered fundamental limitations of any information-processing system, and since our physical universe operates through information processing, those limits turn out to be deeply physical.

    When we talk about an “undecidable algorithm,” it’s not just a made-up puzzle. It’s a statement about what can ever be computed or predicted by any computational system using finite energy and time. Computation isn’t something that only happens in silicon. It occurs whenever any physical system evolves according to rules: your brain thinking, a star burning, a quantum particle’s wavefunction collapsing, a Turing machine performing operations, a natural-language conversation evolving, or an image being categorized by a neural network’s activations and pattern recognition. All of these are forms of physical computation that actualize information from possible microstates at a resource cost of time and energy. What Gödel proved is that some questions can never be resolved into a definite, discrete answer from within a given formal system, even with infinite compute resources. What Turing proved, with a closely related diagonal argument, is the undecidability of the halting problem: there are questions about these processes that cannot be answered in general without literally running the process itself.
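
    Turing’s argument is short enough to sketch in code. The decider halts() below is hypothetical, which is exactly the point: the paradox() construction shows that no correct, always-terminating implementation of it can exist.

    ```python
    def halts(func, arg):
        """Hypothetical decider: return True iff func(arg) eventually halts.
        Turing's diagonal argument shows no correct implementation can exist."""
        raise NotImplementedError

    def paradox(prog):
        # Do the opposite of whatever halts() predicts prog(prog) will do.
        if halts(prog, prog):
            while True:      # halts() said "it halts", so loop forever
                pass
        return "done"        # halts() said "it loops", so halt immediately

    # Consider paradox(paradox): whichever answer halts(paradox, paradox) returns
    # is wrong, so no total, always-correct halts() can exist.
    ```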

    It’s worth distinguishing two forms of uncomputability that constrain what any system can know or compute. The first is logical uncomputability: the classically studied, inherent limits established by Gödelian incompleteness and Turing undecidability. These show that within any sufficiently powerful formal system there exist true statements that cannot be proven from within that system, and that there are computational problems no algorithm can decide, regardless of available resources. This is a fundamental limitation on what is logically computable.

    The second form is state-representation uncomputability, which arises from the physical constraints of finite resources and size limits in any classical computational system. A classical Turing-machine-style computer, no matter how large, can only represent a finite, discrete number of binary states. To perfectly simulate a physical system you would need to track every particle, every field fluctuation, every quantum degree of freedom, which requires a computational substrate at least as large and complex as the system being simulated. Even a coffee cup of water would need a solar-system- or even galaxy-sized classical computer to completely represent every possible microstate the water molecules could be in.
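
    A rough back-of-envelope shows the scale (the only inputs are Avogadro’s number, Boltzmann’s constant, and the standard molar entropy of liquid water, roughly 70 J/(mol·K); the exact figures don’t matter, only the orders of magnitude):

    ```python
    import math

    AVOGADRO = 6.022e23      # molecules per mole
    K_B      = 1.381e-23     # Boltzmann constant, J/K
    S_MOLAR  = 70.0          # standard molar entropy of liquid water, ~J/(mol*K)

    moles   = 250.0 / 18.0               # ~250 g of water, a decent coffee cup
    entropy = moles * S_MOLAR            # thermodynamic entropy, J/K

    # Boltzmann: S = k_B * ln(W), so log2(W) = S / (k_B * ln 2)
    bits_to_label_one_microstate = entropy / (K_B * math.log(2))

    print(f"molecules in the cup:          {moles * AVOGADRO:.1e}")
    print(f"bits to label ONE microstate:  {bits_to_label_one_microstate:.1e}")
    # The number of microstates W is then ~2**(1e26); enumerating them all is
    # beyond any conceivable classical machine, however large.
    ```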

    This creates a hierarchy of knowability: the universe itself is the ultimate computer, containing maximal representational ability to compute its own evolution. All subsystems within it including brains and computers, are fundamentally limited in what they can know or predict about the whole system. They cannot step outside their own computational boundaries to gain a “view from nowhere.” A simulation of the universe would require a computer the size of the universe, and even then, it couldn’t include itself in the simulation without infinite regress. Even the universe itself is a finite system that faces ultimate bounds on state representability.

    These two forms of uncomputability reinforce each other. Logical uncomputability tells us that even with infinite resources, some problems remain unsolvable. State-representation uncomputability tells us that in practice, with finite resources, we face even more severe limitations: there exist true facts about physical systems that cannot be represented or computed by any subsystem of finite size. This has profound implications for AI and cognition: no matter how advanced an AI becomes, it will always operate within these nested constraints, unable to fully model itself or perfectly predict systems of comparable complexity.

    We see this play out in real physical systems. Predicting whether a fluid will become turbulent is suspected to be undecidable, in the sense that no closed-form equation can tell you the answer without simulating the entire system step by step. Similarly, determining whether certain materials have a spectral gap, a key ground-state property, has been proven equivalent to the halting problem. These aren’t abstract mathematical curiosities but real limitations on what we can predict about nature. The reason mathematics works so beautifully in physics is precisely because both are constrained by the same computational principles. However, Gödel and Turing show that this beautiful correspondence has limits: there will always be true physical statements that cannot be derived from any finite set of laws, and physical questions that cannot be answered by any possible computer, no matter how advanced.

    The idea that the halting problem and physical limitations are merely abstract concerns with no bearing on cognition or AI misses a profound connection. If we accept that cognition involves information processing, then the same limits that apply to computation must also apply to cognition. For instance, an AI with self-referential capabilities would inevitably encounter truths it cannot prove within its own framework, creating fundamental limits on its ability to represent factual information. Moreover, the physical implementation of AI underscores these limits. Any AI system exists within the constraints of finite energy and time, which directly bounds what it can know or learn. The Margolus-Levitin theorem caps the number of elementary operations a system with a given energy can perform per unit time, and Landauer’s principle tells us that irreversibly erasing information during computation carries a minimum energy cost per bit. Every step in the process of cognitive thinking and learning/training therefore has a real, physical thermodynamic price, and what those steps can ever settle is bounded by the mathematical principles of undecidability and incompleteness.
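
    Both bounds are concrete enough to plug numbers into; 300 K and 1 J below are just illustrative choices, not anything specific to a real machine:

    ```python
    import math

    K_B = 1.381e-23    # Boltzmann constant, J/K
    H   = 6.626e-34    # Planck constant, J*s

    # Landauer's principle: minimum energy to irreversibly erase one bit at temperature T.
    T = 300.0                                   # room temperature (illustrative), K
    landauer_j_per_bit = K_B * T * math.log(2)  # ~2.9e-21 J

    # Margolus-Levitin theorem: maximum rate of transitions to orthogonal states
    # ("elementary operations") for a system with average energy E above its ground state.
    E = 1.0                                     # one joule (illustrative)
    max_ops_per_second = 4 * E / H              # ~6e33 ops/s

    print(f"Landauer limit at 300 K: {landauer_j_per_bit:.2e} J per bit erased")
    print(f"Margolus-Levitin bound:  {max_ops_per_second:.2e} operations/s per joule")
    ```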