stravanasu

  • 4 Posts
  • 48 Comments
Joined 2 years ago
Cake day: July 5th, 2023



  • They can be useful when used “in negative”. In a physics course at an institution near me, students are asked to check whether the answers an LLM/GPT gives to physics questions are correct, and why.

    On the one hand, this backs the students against the wall, so to speak, because they clearly can’t use the same or another LLM/GPT to answer, or they’d be going in circles.

    But on the other hand, they actually feel empowered when they catch the errors in the LLM/GPT; they really get a kick out of that :)

    As a bonus, the students see for themselves that LLMs/GPTs are often grossly or subtly wrong when answering technical questions.









  • lack of global hotkeys in Wayland, graphics tablet support issues, OBS not supporting embedded browser windows, support issues with Japanese and other foreign-language keyboards as well as on-screen keyboards that are somehow worse than on X11, no support for overscanning monitors or multiple mouse cursors, no multi-monitor fullscreen option, regressions in accessibility, inability of applications to restore their previously saved window position, no real automation alternative to xdotool, lacking BSD support, and worse input latency in gaming.

    All things that don’t matter to modern users.








  • these autonomous agents represent the next step in the evolution of large language models (LLMs), seamlessly integrating into business processes to handle functions such as responding to customer inquiries, identifying sales leads, and managing inventory.

    I really want to see what happens. It seems to me these “agents” are still useless at handling tasks like customer inquiries. Hopefully customers will get tired and switch to companies that employ competent humans instead…






  • Title:

    ChatGPT broke the Turing test

    Content:

    Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. […]

    researchers […] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time

    A complete contradiction: if players still correctly identify the bots 60% of the time, which is above chance, then the bots are detectable and the title’s claim that ChatGPT “broke” the test doesn’t hold. Trash Nature; it has become just an extremely expensive gossip science magazine.

    PS: The Turing test involves comparing a bot with a human, without knowing which is which. So if more and more bots pass the test, this can be the result either of an increase in the bots’ Artificial Intelligence, or of an increase in humans’ Natural Stupidity.