Today’s post is going to be super short. I decided to try out the n8n agentic AI automation platform. My use case was to set up local AI agents, and I wanted to try n8n’s self-hosted option that I could run locally.
Installation and setup
- Installation and setup were super smooth. n8n offers a Docker setup, and getting it running was seamless.
- Within 1-2 minutes I had n8n available on localhost.
- I have previously used Ollama, so running models locally was not new to me. I had to tweak n8n’s config for the local Ollama setup; this part is covered in their README.
- I was able to test the n8n demo workflow instantly once the above steps were done. This demo workflow is a chat agent that connects to the local Ollama and uses the llama 3.2 model, which I had to download first.
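Outside of n8n, the same “chat agent talks to local Ollama” step can be sketched directly against Ollama’s local REST API. A minimal Python sketch, assuming Ollama’s default port (11434) and that the model was pulled under the tag `llama3.2`:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(prompt, model="llama3.2"):
    # Request body for Ollama's /api/generate endpoint; stream=False asks
    # for the whole completion in a single JSON object instead of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask(prompt):
    # Only works with a running local Ollama and the model already pulled
    # (e.g. `ollama pull llama3.2`).
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama):
#   print(ask("Summarize what n8n does in one sentence."))
```

This is roughly what the demo workflow’s Ollama node does for you behind the scenes; n8n just wires the chat input into that request.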
Hello World workflow setup
My use case was to set up local AI agents for some of the manual tasks I do, TBD. I was hopeful I would have something up and running soon, but to my dismay, n8n has a significant learning curve. This part was not intuitive or seamless to me.
There is a wide variety of templates available, and when I clicked the “Use for Free” button, it automatically took me to my locally running instance. This part was very helpful: I could learn how similar workflows were set up and play around with them.
I tried to follow some tutorials, and one of them built the following workflow:
Manual trigger > Get top posts from HackerNews based on the keyword of choice
I was able to test it out. I wanted the JSON output to be rendered as HTML, for better readability.
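The JSON-to-HTML step I’m still tweaking can be sketched outside n8n too. A minimal Python sketch, assuming each post comes back as a dict with `title`, `url`, and `points` fields (field names assumed from the kind of data the HackerNews node returns):

```python
import html
import json


def render_posts(posts):
    # Turn a list of post dicts into a simple HTML unordered list.
    # html.escape guards against titles/URLs breaking the markup.
    items = [
        '<li><a href="{}">{}</a> ({} points)</li>'.format(
            html.escape(p.get("url", ""), quote=True),
            html.escape(p.get("title", "")),
            p.get("points", 0),
        )
        for p in posts
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"


# Example with made-up data in the assumed shape:
sample = json.loads('[{"title": "Show HN: n8n", "url": "https://n8n.io", "points": 42}]')
print(render_posts(sample))
```

Inside n8n, the same transformation would live in a Code node between the HackerNews node and whatever delivers the output.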
At the time of writing this post, I am still tweaking this part. I don’t want to miss my writing streak, which is why I’m sharing this super quick update today. I will continue to play around with various options for setting up AI agents to automate some mundane tasks, and I will share my learnings here. Stay tuned!