AskCLI – AI Assistant for Linux/Unix Shell.

As part of an experiment with the OpenAI API and its compatibility with various AI projects, I created a simple AI assistant that can be used directly in the Unix/Linux shell. After a session with ChatGPT, I ended up with a straightforward Python program that works as a shell tool called “ask”.

Repository: https://github.com/kmkamyk/ask-cli

How Does It Work?

The tool requires a local AI model (e.g., Ollama, llama.cpp, LM Studio) or API access from one of the commercial providers (OpenAI, Google Gemini, etc.). I tested it with Google Gemini, and it worked seamlessly. In my tests, a local Llama 3.1 8B performed best, but you can experiment with other models such as IBM Granite or Bielik.

You can also leave the system prompt blank in the configuration file (/etc/ask/config.yml), which turns the program into a classic assistant/chatbot.
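
For illustration, here is a minimal sketch of what loading such a configuration could look like. The file path comes from the post, but the key names are simply the ones the query_llm function below reads, and the sample values (a local Ollama endpoint, a model name) are placeholders rather than the project's defaults. PyYAML is assumed.

import yaml  # PyYAML is assumed here


def load_config(path="/etc/ask/config.yml"):
    """Read the YAML config into the dict that query_llm() expects."""
    with open(path, "r") as f:
        return yaml.safe_load(f)


# The resulting dict should provide the keys query_llm() reads, roughly:
#
#   api:
#     base_url: "http://localhost:11434/v1"   # e.g. Ollama's OpenAI-compatible endpoint
#     api_key: "unused-for-local-models"
#   model:
#     name: "llama3.1:8b"
#     system_prompt: "You are a helpful Linux shell assistant."  # leave blank for a plain chatbot
#     temperature: 0.7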

A key component of the entire program is this simple function, which handles communication with the LLM:

import openai  # official OpenAI client library; also works with OpenAI-compatible endpoints


def query_llm(prompt, config):
    """
    Send a query to the LLM and display the response.

    Args:
        prompt (str): User's input query.
        config (dict): Configuration dictionary for API and model.
    """
    client = openai.OpenAI(
        base_url=config["api"]["base_url"],
        api_key=config["api"]["api_key"],
    )
    try:
        completion = client.chat.completions.create(
            model=config["model"]["name"],
            messages=[
                {"role": "system", "content": config["model"]["system_prompt"]},
                {"role": "user", "content": prompt}
            ],
            temperature=config["model"].get("temperature", 0.7),
            stream=True,
        )
        for chunk in completion:
            text_chunk = chunk.choices[0].delta.content
            if text_chunk:
                print(text_chunk, end="", flush=True)
        print()  # Add a new line after the response
    except openai.OpenAIError as error:
        print(f"Error: {error}")

There are plenty of similar ideas out there, but it’s worth building something like this yourself. It’s a great way to understand how it works under the hood and to have some fun along the way! 🚀
