TFO Winder

  • 2 Posts
  • 74 Comments
Joined 1 year ago
Cake day: February 6th, 2024




  • Hmm, interesting. It seems your model is hallucinating a lot; maybe try improving your system prompt and playing with temperature or other params. I have deepseek-ai_DeepSeek-R1-Distill-Qwen-1.5B running locally. Here is my output for the original comment:

    The article discusses the perception of a book as trash due to the author’s use of LLM without informing readers. The author highlights the benefits of using LLMs, such as saving time and improving productivity, by suggesting that using locally deployed LLMs can significantly enhance the summarization process. The article also warns against the misuse of LLMs, emphasizing the importance of verifying information. The author concludes that relying on LLMs can surpass human capabilities when combined with personal knowledge. Key Details: The article’s main points are:

    1. The book is perceived as trash due to LLM usage without reader notification.
    2. LLMs save time and improve productivity.
    3. Using locally deployed LLMs for summarization is effective.
    4. Misuse of LLMs can lead to false information.
    5. Combining LLMs with personal knowledge enhances quality.

    I use the following system prompt before the article (rough sketch of the full call after the prompt):

    You are a concise summarization AI. Follow these rules:

    • NEVER exceed 4 sentences or 150 words.
    • Use this format:
      “Summary: [2-sentence core idea].
      Key Details: [3–4 bullet points].”
    • Omit examples, disclaimers, or fluff.
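
    For reference, this is roughly how the call looks. It is a minimal sketch assuming an OpenAI-compatible local endpoint (e.g. Ollama at localhost:11434); the base URL, API key, and model tag are placeholders that will vary with your setup.

      # Minimal sketch: summarize an article with a locally served model via an
      # OpenAI-compatible API (endpoint, api_key, and model tag are assumptions;
      # adjust them to whatever your local server actually uses).
      from openai import OpenAI

      SYSTEM_PROMPT = (
          "You are a concise summarization AI. Follow these rules:\n"
          "- NEVER exceed 4 sentences or 150 words.\n"
          '- Use this format: "Summary: [2-sentence core idea]. '
          'Key Details: [3-4 bullet points]."\n'
          "- Omit examples, disclaimers, or fluff."
      )

      client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

      def summarize(article_text: str) -> str:
          # Low temperature keeps a small 1.5B model from wandering off-topic.
          response = client.chat.completions.create(
              model="deepseek-r1:1.5b",  # hypothetical local tag for the distill
              temperature=0.3,
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": article_text},
              ],
          )
          return response.choices[0].message.content

      with open("article.txt") as f:
          print(summarize(f.read()))

    Nothing fancy: the strict format rules in the system prompt do most of the work, and temperature is just a secondary knob to experiment with, as mentioned above.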



  • Okay, so hear me out on this. The book mentioned in this article is definitely trash; the author used an LLM without informing readers, which is why most people feel they are being scammed and express frustration and hate.

    I have personally deployed LLMs on my local machines and used them for a variety of things such as summarizing news and articles, coding, image generation, etc., and I have to be honest, it is really, really impressive technology. Any author who takes assistance from an LLM will be hyper-productive compared to someone who does all the labour themselves. I used to take hours to read across a broad area of knowledge and then deep-dive into interesting topics. When an LLM generates a summary and you can decide whether or not to read the source yourself, that is a big time saver and productivity boost. Of course, this can be abused by someone who trusts LLMs too much and doesn't verify what they read; they can give false information, but that's not how they are supposed to be used.

    These language models are really good at creating summaries. I use a locally deployed LLM to read summaries of articles, and if I feel interested I read the entire article end to end from the original source. In almost every case the summary is spot on, without missing any important points or topics; heck, I have created a system prompt so that it tries to give hot takes and nuanced perspectives on the article, and it sometimes impresses me with a new perspective I would not have thought of otherwise.

    I am convinced that using an LLM along with your own knowledge always surpasses the quality of work produced by someone of the same capability who works without the assistance of an LLM.



  • TFO Winder@lemmy.ml to Technology@lemmy.world · Your TV Is Spying On You
    12 days ago

    HEC feature enables IP-based applications over HDMI and provides a bidirectional Ethernet communication at 100 Mbit/s

    I think that bandwidth is too slow for HD/4K streams.

    I am sure the 100 Mbit/s is also a theoretical maximum; I would be impressed if practical cables support even half the original spec.
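
    For scale, a quick back-of-envelope sketch (raw-bitrate arithmetic only; the pixel counts and 24-bit depth are standard figures, and it ignores blanking intervals and chroma subsampling, so real HDMI video rates are even higher):

      # How far is uncompressed video from a 100 Mbit/s HEC link?
      HEC_MBIT = 100  # theoretical maximum from the spec quoted above

      def raw_mbit(width: int, height: int, fps: int, bits_per_pixel: int = 24) -> float:
          """Uncompressed video bitrate in Mbit/s (ignores blanking/subsampling)."""
          return width * height * fps * bits_per_pixel / 1e6

      for name, (w, h, fps) in {"1080p60": (1920, 1080, 60),
                                "4K60": (3840, 2160, 60)}.items():
          mbit = raw_mbit(w, h, fps)
          print(f"{name}: ~{mbit:,.0f} Mbit/s, about {mbit / HEC_MBIT:,.0f}x the HEC link")

    Heavily compressed streaming (roughly 15-25 Mbit/s for 4K) could technically squeeze through, but anything near raw video is orders of magnitude beyond HEC, which is why HDMI carries the actual picture on its much faster TMDS/FRL lanes rather than over HEC.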