

if you have a soup of all liquids and a sieve that only lets coffee and ice cream through it produces coffee ice cream (metaphor, don’t think too hard about it)
that’s how gen ai works. each step sieves out raw data to get closer to the prompt.
Crack an egg on it
studies were done and found similar to what you’re saying.
also the secret listening does not comport with any of the business side of profile marketing, either. So it would have to be an incredibly well kept secret on top of all of that.
get you a wife who does both
the idea that reality is subjective is relatively new
uhhh the parable of the cave?
they do, it’s just written down. IPA isn’t that hard to learn.
I remember a friend of mine becoming a mother and saying, “Yeah, the doctor said never to let her head go back too far, and a split second later my daughter suddenly decided to fucking wrench her neck as far back as it would go”
Wouldn’t you just take issue with whatever the new name for it was instead? “Calling it pattern recognition is snake oil, it has no cognition” etc
what would they have to produce to not be snake oil?
Can you name a company who has produced an LLM that doesn’t refer to it generally as part of “AI”?
can you name a company who produces AI tools that doesn’t have an LLM as part of its “AI” suite of tools?
What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. As long as you know undergrad-level statistics, calculus, and algebra, anyone can read them; you don’t need a qualification, and you can just Google each term you’re unfamiliar with.
While I understand your objection to the nomenclature, in this particular context all major AI-production houses including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA) count LLMs as part of their AI collateral.
I’ve been working on an internal project for my job - a quarterly report on the most bleeding edge use cases of AI, and the stuff achieved is genuinely really impressive.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.
If you don’t know how to do, e.g., a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault, not everyone needs that knowledge, but it does explain the gap between promise and delivery. It can only help you do what you already know how to do, faster.
Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
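To make the Fourier-transform point concrete, here’s a minimal sketch of the kind of thing you’d need to already understand to prompt for it usefully. It uses NumPy’s FFT; the signal and its parameters are made up for illustration.

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the frequency (Hz) with the largest FFT magnitude."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Skip the DC component at index 0 when picking the peak.
    peak = np.argmax(np.abs(spectrum[1:])) + 1
    return freqs[peak]

# A 5 Hz sine wave sampled at 100 Hz for one second.
rate = 100.0
t = np.arange(0, 1, 1 / rate)
sig = np.sin(2 * np.pi * 5 * t)
print(dominant_frequency(sig, rate))  # picks out the 5 Hz component
```

The point isn’t the code itself; it’s that knowing what a spectrum is, why DC gets skipped, and what the sample rate means is exactly the knowledge the chatbot can speed up but not replace.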
I will nationalize your service for free to everyone
that might be more the advertiser needs to update their negs list
presumably when they’re in sub mode?
well actually I’ve taken to calling it “GNU+Linux”
The question was do I feel like I’ve reached the end of what the world has to offer.
No, and as an example of what I find enriching: music, books, TV, film, dance, poetry, news, science, games; other people would count sports. Even if you find between 51% and 99.9% of entertainment of any and all kinds unenriching and unentertaining, you still find some things that meet that bar, and thus have not reached the end of everything in the world.
My point being with such hyperbole: no, no one has reached the end
frequently, yes. You can’t convince me you’ve never been entertained and enriched by entertainment
Please don’t reply all.
Consider the environment before printing this comment.