I am interested in learning how LLMs can understand requests requiring a "tool call".
In this post ["Tool Calling" and Ollama](https://k33g.hashnode.dev/tool-calling-and-ollama), there is a nice description of how "Tool calling" works with Ollama.
The idea of this feature is that LLMs can be given access to a set of tools (e.g., external functions or APIs) and can call them to fetch extra information. To do this, the LLM has to understand the current request, decide that the request should be forwarded to a tool, and extract the arguments for that tool.
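To make that flow concrete, here is a minimal sketch, independent of any particular model: a tool is described with an OpenAI-style JSON schema (the format Ollama accepts), and the model's decision is a structured "tool call" naming a function and its arguments. The function name `add_numbers` and the simulated `tool_call` dict are hypothetical stand-ins for what a real chat response would contain.

```python
def add_numbers(a: float, b: float) -> float:
    """A local function the model can 'call'. (Hypothetical example tool.)"""
    return a + b

# Tool description that would be sent to the model alongside the chat request
tools = [{
    "type": "function",
    "function": {
        "name": "add_numbers",
        "description": "Add two numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["a", "b"],
        },
    },
}]

# Simulated model output: the LLM decided the request maps to a tool
# and parsed the arguments out of the user's message.
tool_call = {"function": {"name": "add_numbers", "arguments": {"a": 2, "b": 40}}}

# Dispatch: look up the named function and apply the parsed arguments
available = {"add_numbers": add_numbers}
fn = available[tool_call["function"]["name"]]
result = fn(**tool_call["function"]["arguments"])
print(result)  # 42
```

In a real setup, the `tool_call` structure comes back from the model in the chat response; the application executes the function and typically sends the result back to the model so it can compose a final answer.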
Here is a shortened version of the code from the original post: