

They may or may not be used here, but you could use LLMs to parse the content of sites being visited by web clients on your network, then ask the LLM whether the content covers certain topics or is work-related. Based on the result, you add or remove the site from a blacklist.
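A minimal sketch of what that could look like, assuming the OpenAI Python client (any local model endpoint would work the same way). The model name, prompt wording, and blocklist format are all placeholders, not a real deployment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are classifying web pages for a workplace content filter. "
    "Given the page text, answer with exactly one word: ALLOW if the page "
    "is plausibly work-related, BLOCK if it is not."
)

def classify_page(page_text: str) -> str:
    """Ask the LLM for an ALLOW/BLOCK verdict on a page's extracted text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": page_text[:8000]},  # truncate long pages
        ],
    )
    return resp.choices[0].message.content.strip().upper()

def update_blocklist(url: str, page_text: str, blocklist_path: str = "blocklist.txt") -> None:
    """Append the site to a simple one-URL-per-line blocklist if the LLM says BLOCK."""
    if classify_page(page_text).startswith("BLOCK"):
        with open(blocklist_path, "a") as f:
            f.write(url + "\n")
```

In practice you would probably batch these checks and cache verdicts per domain rather than calling the model on every page load, since the classification is slow and non-deterministic compared to a string match.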
Is this better than just string matching? I would say likely so, though more stochastic in its results. It would let the LLM provide summaries and context for the pages rather than being confined to specific strings in a list, and it might be better able to handle the context and complexity of the desired outcomes.
For example, there was a paleontology conference at a hotel once that was stuck behind a firewall blacklisting all sites with the string ‘bone’ in them. Completely ridiculous. The string ‘bone’ has different meanings depending on context, which simple string matching cannot capture, but an LLM might be better at identifying that context and acting accordingly.
https://youtu.be/W8tRDv9fZ_c
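That failure mode is easy to reproduce. The snippet below is just an illustration (the page text is made up), showing that a substring filter has no notion of context, which is exactly the gap an LLM classifier is supposed to fill:

```python
BANNED_SUBSTRINGS = ["bone"]

def naive_filter(page_text: str) -> str:
    """Block any page whose text contains a banned substring, regardless of context."""
    text = page_text.lower()
    return "BLOCK" if any(s in text for s in BANNED_SUBSTRINGS) else "ALLOW"

print(naive_filter("Abstracts: trabecular bone density in sauropod vertebrae"))
# -> BLOCK, even though this is a conference program. A classifier that
# reads the surrounding words can tell the page is about paleontology;
# the substring test cannot.
```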