• 0 Posts
  • 77 Comments
Joined 2 years ago
Cake day: June 13th, 2023




  • “A way out west there was this fella, fella I want to tell you about, fella by the name of Jeff Lebowski. At least, that was the handle his lovin’ parents gave him, but he never had much use for it himself. This Lebowski, he called himself the Dude. Now, Dude, that’s a name no one would self-apply where I come from. But then, there was a lot about the Dude that didn’t make a whole lot of sense to me. And a lot about where he lived, like-wise. But then again, maybe that’s why I found the place s’durned innarestin’.”




  • You can. It’s incredibly generous of you to do that!

    You’ll need to know his account number and where the payment gets sent. It might be difficult to get this information quietly though.

    Usually you’d want to call the company that is servicing the loan and ask them to calculate a payoff amount as of a specific date. That way when you make your payment, all outstanding interest is paid off as well as the balance.

    That company may not give you this information depending upon laws and policies. Since you’re not the person they’re contracted with for the loan, it’s possible that they won’t. I don’t expect that to be the case though.

    I used to be a loan officer / underwriter, but I never dealt with student loans, which are governed by different laws, so this is the best info I can share (it’s also been 15 years since I did that work). I used to give payoff amounts to other people and organizations all the time, though. People refinance their loans constantly, so handing out a payoff calculation is very common when the requester has the account number and the name of the loan account holder.
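
    To give a feel for what the servicer is doing when they quote a payoff as of a specific date, here’s a rough sketch. This is illustrative only: real servicers have their own day-count conventions and fee rules, and the numbers here are made up.

    ```python
    # Rough sketch of a payoff-amount calculation (illustrative only;
    # actual servicers apply their own day-count rules, fees, and policies).
    def payoff_amount(principal, annual_rate, days_until_payoff):
        """Balance plus simple interest accrued through the payoff date."""
        per_diem = principal * annual_rate / 365  # interest that accrues per day
        return round(principal + per_diem * days_until_payoff, 2)

    # e.g. a $20,000 balance at 5% APR, paid off 10 days from now:
    print(payoff_amount(20_000, 0.05, 10))  # 20027.4
    ```

    This is why the quote is tied to a date: every extra day adds another day of per-diem interest on top of the balance.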









  • Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.

    AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.

    LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just predict which next word or sentence is most probable and string things together that way.

    If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to be most probable.

    The reason they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to let them mimic human speech in a convincing way.
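
    A toy sketch of the “most probable next word” idea, for the curious. This is nowhere near how a real LLM works (those use neural networks over huge token vocabularies), but it shows prediction from counted probabilities with zero understanding involved:

    ```python
    from collections import defaultdict

    # Toy bigram model: count which word follows which in some training text,
    # then "predict" by picking the most frequent follower. No comprehension,
    # just counting -- the same basic idea an LLM scales up enormously.
    corpus = "the dude abides the dude bowls the rug ties the room together".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def most_probable_next(word):
        """Return the word that most often followed `word` in the corpus."""
        followers = counts[word]
        return max(followers, key=followers.get) if followers else None

    print(most_probable_next("the"))  # "dude" -- it followed "the" most often
    ```

    Scale the corpus up to most of the internet and the vocabulary up to every token, and the predictions get good enough to sound like a person.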

    And when they start hallucinating, it’s because they don’t understand what they’re saying, only what sounds plausible, and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM with a second LLM.