LLMs should not be used where absolute accuracy is required. Text summarization is one such case. No matter how you tweak the prompt to prevent hallucination (e.g., telling the LLM to refer only to the text in this section and not in that one), hallucination still happens, ALL THE TIME. Sometimes the prompt itself becomes a source of hallucination, piling new headaches on top of old ones.
Gen AI may be a good tool for generating images, videos, or music (let's set aside the copyright infringement problem for the moment), because these outputs do not require absolute accuracy -- our brains have all sorts of tricks for filtering out the errors. But applying Gen AI to things that do require absolute accuracy is a fool's errand.
One final thought on text summarization. The whole point of AI is to automate the mundane stuff so that humans can engage in creative tasks. Since when did text summarization become a "mundane" task that needs to be automated? Summarization is a highly creative task: it requires you not only to comprehend the source material but also to create, out of thin air, a simplified version that captures its gist. It should never be automated. Yet here we are, with so many companies rolling out Gen AI products to automate text summarization. We are using Gen AI on the wrong thing!