I try to contribute to things getting better, with sourced information, OC and polite rational skepticism.
Disagreeing with a point ≠ supporting the opposite side, I support rationality.
Let’s discuss to make things better sustainably.
Always happy to question our beliefs.
No, it’s getting in a new relationship before leaving the previous one.
Kobo was bought by Rakuten in 2012. Rakuten is the Japanese Amazon, except it failed to fully scale internationally.
Something tells me that Nicole is monkey-branching…
The abstract of the scientific article:
In the relentless pursuit of quantum computational advantage, we present a significant advancement with the development of Zuchongzhi 3.0. This superconducting quantum computer prototype, comprising 105 qubits, achieves high operational fidelities, with single-qubit gates, two-qubit gates, and readout fidelity at 99.90%, 99.62%, and 99.13%, respectively. Our experiments with an 83-qubit, 32-cycle random circuit sampling on the Zuchongzhi 3.0 highlight its superior performance, achieving 1×10⁶ samples in just a few hundred seconds. This task is estimated to be infeasible on the most powerful classical supercomputers, Frontier, which would require approximately 5.9×10⁹ yr to replicate the task. This leap in processing power places the classical simulation cost 6 orders of magnitude beyond Google’s SYC-67 and SYC-70 experiments [Morvan et al., Nature 634, 328 (2024)], firmly establishing a new benchmark in quantum computational advantage. Our work not only advances the frontiers of quantum computing but also lays the groundwork for a new era where quantum processors play an essential role in tackling sophisticated real-world challenges. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.134.090601
Random circuit sampling is a problem designed to showcase the strength of quantum computing: it consists of sampling the outcomes of many randomly generated quantum circuits. So, having a computer based on quantum phenomena such as superposition and entanglement is obviously a big help, as opposed to having to imperfectly simulate this on a classical computer. So much so that classical supercomputers can no longer simulate this problem in a reasonable human time. They call this “quantum supremacy”.
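To make concrete what “imperfectly simulate this on a classical computer” means, here is a toy Python sketch of my own (not the method used in the paper): it stores all 2^n complex amplitudes of the state vector, which is exactly the part that blows up exponentially with the number of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))  # contract gate with qubit q
    psi = np.moveaxis(psi, 0, q)                    # restore qubit ordering
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Controlled-Z: flip the sign of amplitudes where both qubits are |1>."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def random_circuit_sampling(n_qubits=4, n_cycles=8, n_samples=1000):
    """Brute-force simulation: keep all 2**n_qubits amplitudes, apply random
    single-qubit rotations plus entangling CZ gates, then sample bitstrings."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for _ in range(n_cycles):
        for q in range(n_qubits):
            theta, phi = rng.uniform(0, 2 * np.pi, size=2)
            gate = np.array([
                [np.cos(theta / 2), -np.exp(1j * phi) * np.sin(theta / 2)],
                [np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)],
            ])
            state = apply_1q(state, gate, q, n_qubits)
        for q in range(0, n_qubits - 1, 2):
            state = apply_cz(state, q, q + 1, n_qubits)
    probs = np.abs(state) ** 2
    probs /= probs.sum()  # guard against float rounding
    return rng.choice(2 ** n_qubits, size=n_samples, p=probs)

print(random_circuit_sampling())  # fine for 4 qubits; 83 qubits would need ~2^83 amplitudes
```

At 4 qubits this is trivial; memory and time grow like 2^n, which is why 83 qubits and 32 cycles is hopeless this way, and why the classical cost estimates in the paper rely on much smarter algorithms and still land at billions of years.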
It’s like giving a math problem to a math professor and a philosophy professor, and then demonstrating how much better the math professor was at solving this problem.
But it’s a good benchmark to compare quantum computers with each other.
Overall, it’s still useless to the average server or gamer.
Even quantum computing, which operates on superposition, ultimately collapses to definite states when observed—the underlying physics differs, but the principle remains: given identical initial conditions, identical outcomes follow.
I think this is incorrect: it does collapse to a definite state when observed, but the value of the state is probabilistic. We make it deterministic by producing a large number of measurements and applying a test to the statistical distribution of all the measurements to get a final value. Maybe our brain also does a test on a statistic of probabilistic measurements, or maybe it doesn’t and depends directly on probabilistic measurements, or a combination of both.
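A minimal sketch of that idea, with made-up numbers, just to illustrate “many probabilistic measurements plus a statistical test gives a deterministic answer”:

```python
import numpy as np

rng = np.random.default_rng(42)

def read_out_bit(p_one=0.7, n_shots=10_000):
    """Each individual measurement is random (1 with probability p_one), but a
    simple test on the distribution of many shots (compare the observed
    frequency to 0.5) returns the same value essentially every time."""
    shots = rng.random(n_shots) < p_one
    return int(shots.mean() > 0.5)

print([read_out_bit() for _ in range(5)])  # -> [1, 1, 1, 1, 1]
```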
we just lack perfect information about initial conditions.
We also lack fully proven equations, or complete solutions of the equations we do have, in fluid dynamics.
I think parsimony is very much a matter of personal opinion at this point in our knowledge.
The good old original “AI” made of trusty `if` conditions and `for` loops.
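Something in this spirit, a made-up guess at the flavour rather than any real game’s code:

```python
# Classic rule-based "AI": exhaustively enumerate situations with ifs and fors.
def npc_turn(npc, enemies):
    for enemy in enemies:                          # scan everything in range
        if npc["hp"] < 20:
            return "flee"
        if enemy["distance"] < 2:
            return "attack"
        if enemy["distance"] < 10 and npc["has_bow"]:
            return "shoot"
    return "patrol"

npc = {"hp": 50, "has_bow": True}
print(npc_turn(npc, [{"distance": 7}]))  # -> "shoot"
```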
There are various independent, reproducible measurements that give weight to the hot big bang theory as opposed to other cosmological theories. Are there any for the deterministic nature of humans?
Quantum physics is not deterministic, for example. While quantum decoherence explains why macroscopic physical systems behave deterministically, can we really say quantum effects couldn’t play a role in our neurons?
On a slightly different point, quantum bits are not binary: they can represent a continuous superposition of multiple states. Why would our mind be closer to binary computing than to quantum computing?
As I suggested above, I would say creating a coherent idea or link between ideas that was not learned. I guess it could be possible to create an algorithm to estimate if the link was not already present in the learning corpus of an ML model.
From a truly scientific standpoint, we are machines built with organic matter. Our ones and zeros are the same as the machines we create, we just can’t deal with the fact that we aren’t as special as we like to think. We derive meaning from our individuality, and to lose that would mean that we aren’t individual. However, we are deterministic.
Would you have some scientific sources about the claim that we think in binary and that we are deterministic?
I think you may be conflating your philosophical point of view with science.
I don’t disagree with your definition, but I’m not sure what it changes regarding current LLMs lacking human creativity. Do you think there is nothing more than probabilistic regurgitation in human creativity, so that LLMs have already matched it and it’s just a matter of how we regard them?
With historians’ work, I think it’s possible to say an idea appeared at roughly this point in time and space, even if it was refined by many previous minds. For example, you can tell roughly when an engineering invention or an art style appeared. Of course you will always have a specialists’ debate about who the actual pioneer was (often influenced by patriotism), but I guess we can at least reach a consensus on when it started to actually impact society.
Also, maybe we can have an algorithm to determine if a generated result was part of the learning corpus or not.
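A naive sketch of what such a check could look like (purely hypothetical, and it only catches verbatim overlap, not paraphrased or restated ideas, which is the genuinely hard part):

```python
def ngrams(text, n=8):
    """Character n-grams; crude but cheap."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def novelty_score(generated, corpus_docs, n=8):
    """Fraction of the generated text's n-grams that never appear in the corpus.
    ~0 means essentially regurgitated, ~1 means mostly unseen."""
    corpus_grams = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    gen_grams = ngrams(generated, n)
    if not gen_grams:
        return 0.0
    return len(gen_grams - corpus_grams) / len(gen_grams)

corpus = ["the cat sat on the mat", "quantum computers use superposition"]
print(novelty_score("the cat sat on the mat", corpus))                    # 0.0: verbatim
print(novelty_score("superconducting cats entangle with mats", corpus))   # close to 1.0
```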
Yes, it’s different in the creative aspect, but it’s similar in the job loss aspect.
Indeed, is his opinion based on the way the current technology works by regurgitating, or is it based on the loss of creative jobs?
So would his stance change if we move past basic LLMs and have models that can generate coherent, innovative ideas that were not learned?
I think what we need to protect is the quality of life rather than the jobs. I wish for a 20h work week at the same QoL.
Once they actually produce great games, you’ll probably want to play them. People didn’t stop buying products because they were made by machines instead of artisans.
Has any of that happened on an average Arch install in the past few years? The only thing I have seen is an email once or twice a year asking users to run a manual operation to fix a package migration.
Rakuten is a big mess of data tracking and advertising, but I’m glad to hear Kobo remains a good product.