Logic: more than just a Magic 8 Ball

Why I Built JARVIS: A Private AI for Thinking, Not Magic

By David Rogers

The Magic 8 Ball Problem

People expect AI to be like an articulate Magic 8 Ball: you ask it a question, it spits out an answer, you marvel at how clever it is, and then you follow it blindly with no understanding of how or why you got there. While there might be a place for this, in the wonder and delight of first-time users, there are problems and missed opportunities.

People who use chat agents, such as those from OpenAI, Anthropic, and Google to name a few, use them to get an answer to a question: 'How can I build an app?' or 'Write a story for me.' Blindly copying and pasting the output removes the need for critical thinking.

From my experience at a commercial level, entrepreneurs, managers, and CEOs are all drawn to this concept like moths to a flame. Some will just ask their developers to use the tools to be faster, or worse, remove developers entirely after seeing a short video of a person who built an entire app from one prompt.

Use chat agents like this and things break, and it can get ugly. But I hear you asking 'how'.

- You make bad decisions because you don't understand the reasoning
- You lose the ability to think critically (outsourced thinking)
- You miss context your own experience would have caught
- Your teams don't develop; they just get faster wrong answers
- Most importantly: no ownership of thoughts and ideas, no creativity, just mental factory-line work

I'll come back to the last point a bit later, when I explain why I decided to build JARVIS. You could understandably think at this point that I am against AI agents altogether. This could not be further from the truth.

I'm confident that every person has an 'aha' moment when looking at AI agents. For most, I suspect, it stops at the start of this article: looking at the single text input and waiting for the generative Magic 8 Ball to tell them the answer to 'what is 100 degrees Fahrenheit in Celsius?'. Some moments are more profound, and one that I believe I have experienced is this: AI can have a persona, and it can mimic you in a very helpful and soulful way.

A Different Way: AI as Thought Partner

The real turning point for me was changing the narrative. Instead of asking for the answer to a problem, I asked the AI to ask me questions so I could come up with my own answer. I've learned that allowing or asking the AI to ask you questions has a compounding effect. You get to impart some of yourself to it and feel some control over what is being discussed. Most importantly, you understand the answers you get back. They're a reflection of yourself.

The Process (When It Works):

1. Articulate your problem/question clearly
2. Ask AI to ask you clarifying questions
3. Answer honestly and specifically
4. Listen to what the questions reveal about what you already know
5. When you disagree, trust that disagreement—it's signal
6. The "answer" becomes YOUR thinking
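The process above can be sketched in code against a local model. This is a minimal sketch, assuming an Ollama server on its default port (11434) and the model name `llama3` as placeholders; the real work is done by the system prompt, which forbids answers until the questions have been asked.

```python
import json
import urllib.request

# System prompt implementing the question-first process: the model must
# interview you before it may summarise YOUR reasoning back to you.
QUESTION_FIRST_PROMPT = (
    "You are a thought partner, not an oracle. When I describe a problem, "
    "do NOT give me an answer. Instead, ask me one clarifying question at a "
    "time. After I have answered several questions, summarise MY reasoning "
    "back to me and let me draw the conclusion myself."
)

def start_session(problem: str) -> list[dict]:
    """Build the opening message list for a question-first session."""
    return [
        {"role": "system", "content": QUESTION_FIRST_PROMPT},
        {"role": "user", "content": problem},
    ]

def chat(messages: list[dict], model: str = "llama3") -> str:
    """Send the conversation to a local Ollama server.

    Requires `ollama serve` to be running; deliberately not called here.
    """
    payload = json.dumps({"model": model, "messages": messages, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Step 1 of the process: articulate the problem clearly.
messages = start_session("Should I change careers?")
```

From here, steps 3 to 6 are a loop: append the model's question and your honest answer to `messages` and call `chat` again until the conclusion is yours.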

I've told members of my own family who were struggling to come to a decision about a problem, when I couldn't help them, to articulate the problem in real terms to an AI and ask it to ask them questions to help them reach a conclusion. The key, though, is being honest and specific. In one example, a friend was having a career crisis, and this thought process resulted in them taking a leap they already knew they had to take: starting a new role elsewhere. They have since been flying.

In the above examples, they were mostly using a paid subscription; in some cases, the free open options. All of these models are developed and tuned to a certain persona, and while you can change its way of thinking through prompting, that persona is inherent to the model you are using. It will have biases, and it will hallucinate. But how do you know?

You question it. You use your own knowledge to your advantage. You can disagree, and you should! How can a model know everything that ever existed, including thoughts that you have never had? The AI isn't the authority. You are. When you disagree, that's not failure; that's a thought partnership working.

Why This Matters to Me (And Why I Am Building JARVIS)

This philosophy of AI as a thought partner, questions over answers, and disagreement as signal was one I realised I couldn't fully live by with a subscription service. Every time I logged into ChatGPT or Claude, I was working within someone else's system, influenced by their training, their business model, their priorities. If I wanted a true thought partner, one that would grow with me, understand my specific thinking patterns, and remain sovereign to my own ideas, I needed to build it myself. On my own computer. Under my control. That's when I decided to build JARVIS.

On-Prem, Sovereignty, and Your Thoughts

On my own computer. Under my control. Ownership of thoughts and ideas. All three of those things can only happen on premises (on-prem). Well, I could do it in a private cloud, but that means quite a bit of investment, and I'm still at the mercy of the provider.

If I have it locked down in my own network, on my own hardware, using my own VRAM, I am in total control of the AI I want to use. All the proprietary information I put in stays with me. I'm only really at the mercy of a power bill and the occasional power failure, and even those could be managed through some home solar panels and a battery backup.

I want my thought partner and assistant to learn from me and have connections to my data and files, but I don't want to hand those over to an online service. Any prompt engineering, model changes, testing, and even security is wholly owned and controlled by me. I am going to create a synthetic connection from my mind to the AI 'soul', and this is something I want to have total ownership of. It's not for anyone else; it's for me. But ownership without customisation isn't enough. The real power of on-prem isn't just control; it's personalisation. Now that I own the system, I can engineer it to match how I think, how I learn, how I work. This is where personality matching comes in.

Personality Matching: The Next Evolution

Everyone learns and converses differently. In fact, like fingerprints, everyone's mind is unique. Through the way you engineer agent prompts, you can make the agent into any manner of persona: a doctorate in philosophy, a drunk scallywag, or even a lonely pigeon named Greg. I actually did that last one; it was very humorous.

People learn and adapt in different ways, and whether it's a professor or your football teammate explaining something to you matters. You will take it in better and speak more openly, without fear of judgment from the AI.

An industry will open up where humans identify what type of agent would work best for each employee, and engineer the prompts to match their persona and the work they need to do.

There are some real benefits to this:

- Some people learn better from an authority figure
- Some from a peer
- Some from something weird and memorable that keeps them engaged (Thanks Greg)
- Matching agent personas to employee personalities solves the judgment problem and increases engagement
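As a concrete sketch, a persona like Greg can be baked into a local model with an Ollama Modelfile; the base model and the wording here are illustrative assumptions, not a recommendation:

```
# Modelfile: a persona baked into a local model
FROM llama3
PARAMETER temperature 0.9
SYSTEM """You are Greg, a lonely but well-meaning pigeon. Explain things
simply, between mild complaints about park benches, and never judge the
person asking."""
```

Building and running it is then two commands: `ollama create greg -f Modelfile`, followed by `ollama run greg`.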

This is another solid foundational reason that I want to create JARVIS in my style, the way I need it to be.

What JARVIS is

JARVIS is a nod to the Iron Man movie series. It's essentially the original 'vibe-matched' AI that knows Tony Stark, works with him, and adapts to him and his thoughts and projects. I remember seeing it as my younger self and thinking, 'that is so awesome, imagine what I could do with an assistant like that.' It's now actually possible, nigh probable.

With tools like Ollama, open source models, custom prompts, and projects like ClawdBot (an open-source agent framework), I can build an agent suited to me, and that's exactly what I did.

JARVIS 0.1 exists. Currently, it's a simple chat interface I connect to via VPN on my home computer. It's completely closed off to outside users. It is, however, very junior. With my lack of VRAM, I can only run the smallest models, and they don't really provide good responses to prompts. But the foundation is there, and the path forward is clear and achievable, albeit at the cost of a few dollars for a larger GPU. That is a controlled cost, though.
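The VRAM ceiling is easy to put rough numbers on. Here is a back-of-the-envelope sketch; the 20% overhead factor for KV cache and runtime is my own crude assumption, not a specification:

```python
def vram_estimate_gb(n_params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold a model's weights, with ~20% added
    for KV cache and activations (a rule of thumb, not a spec)."""
    bytes_for_weights = n_params_billion * 1e9 * bits_per_weight / 8
    return round(bytes_for_weights * overhead / 1e9, 1)

# A 7B model quantised to 4 bits needs roughly 4.2 GB:
print(vram_estimate_gb(7, 4))   # 4.2
# The same model at 16-bit precision needs roughly 16.8 GB:
print(vram_estimate_gb(7, 16))  # 16.8
```

Which is exactly why a consumer card with a few GB of VRAM is limited to small quantised models, and why a larger GPU is the obvious next purchase.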

I could theoretically ask it to do some tasks for me, but I don't want to do that until I am confident that it understands, knows me, and gives me the correct responses. I might be giving it the ability to spend my money, and I want to know it would only spend where I would, take a risk where I would, and say no when I would.

The Business Case for Personality-Matched Agents

I am embarking on this because it's coming, and we need to understand it now, before it's embedded in our work lives. It is also one of the major pieces that will make a significant impact on businesses and their employees.

Customised thought partners are going to increase the quality of employees' output in a shortened time frame. That is dollars in the bank! With on-prem services, you are also going to be able to control your spend more easily; you may not need that enterprise license where your data is housed outside your walls, dangling your IP out there for all to see.

Take performance management as an example. A difficult employee might struggle with authority—they shut down when a manager gives them direct feedback, but open up to peer coaching. Imagine giving them a personality-matched agent that acts as a peer mentor: asks questions instead of commands, celebrates small wins, frames feedback as collaborative problem-solving. The same person who was defensive in one-on-ones suddenly becomes engaged because the way the feedback is delivered matches their learning style.

Conversely, get the persona wrong and you amplify the problem. A naturally anxious employee paired with an overly casual, dismissive agent persona could become more insecure, not less. The tool doesn't fix bad judgment—it amplifies whatever energy you give it.

This is both the promise and the risk of personality-matched agents. Done well, it could transform struggling employees into high performers. Done poorly, it could make things worse. That's one of a few critical things to get right in this new wave of agents. I hope, though, that my journey to getting JARVIS the way I need it will help guide and grow the industry.

What's Coming?

Over the coming months, I'll be documenting this journey honestly. The wins. The failures. What works, what doesn't, and what I learn along the way. Because building JARVIS under real constraints—limited VRAM, real work to do, actual personalities to match—isn't a polished case study. It's an experiment. And I want you to follow along and see what we discover about thought partners, personality matching, and on-prem agents in the real world.