Especially for technical documentation matters. 100% of links are old or just hallucinations.

  • onehundred@lemm.ee · 2 points · edited · 15 hours ago

    For product- or experience-based things, not typically. For example, if I was researching which space heater to buy and asked it to summarise people’s experiences with the best three space heaters, it normally comes back with a few reviews from random blogs and a couple of Reddit links, none of which have ever caused me a problem.

    If I asked it about a very specific problem I was having with an Azure Data Factory pipeline, it might come back with some Microsoft documentation. The suggestions generally don’t work, and the links to said documentation are complete hallucinations, which is quite ironic given Microsoft’s massive investment in the company.

    I have done some research on a recent workplace dispute using the deep research feature, and I have to say I found it reasonably good in the sources it chose to go with, and they were all valid.

    I know I sound like an OpenAI shill, but it’s generally been quite good for me in recent memory, apart from referencing technical documentation, specifically for Microsoft products.

    This is of course my personal opinion, and they’re like arseholes, everyone’s got one.

  • deadcatbounce@reddthat.com · 1 point · 13 hours ago

    Which model? I find 4o to be rubbish; I just get bullshit. 4-turbo was great. 4.5 is the only useful model there at the moment. Perplexity is what I use for searches, and it’s good on references. ChatGPT for making things.

  • toadjones79@lemm.ee · 2 points · 16 hours ago

    I’ve only had a couple of them not work, but I always ask for links to save on data. I think it has to do with what you are researching, which is hard to predict.

  • TranquilTurbulence@lemmy.zip · 2 points · 17 hours ago

    Yes. Often I find an article that doesn’t answer my question but comes close. For example, it could be an article about porcelain manufacturing in the Soviet Union in general, but it won’t answer my specific question about manufacturing porcelain for the electrical grid. That sort of thing happens all the time, and GPT confidently claims something that isn’t supported by the sources it cites.

    Actually, it’s a lot like search results in general. The first 10 results could lead you in the right direction, but won’t have exactly what you’re after.