WZ-IT Blog
Insights, tutorials and best practices from the world of Cloud, DevOps and Open Source
AI
Upgrade Announcement: Our Cloud GPU Servers Now Run on NVIDIA RTX 6000 Blackwell
We are now running the NVIDIA RTX 6000 Blackwell Max-Q instead of the RTX 6000 Ada in our AI Server Pro. This gives you significantly...
Install Paperless-AI (Linux) – One-Liner Installer Script with Docker & Caddy HTTPS
Paperless-AI is an AI extension for Paperless-ngx that automatically analyzes incoming documents and assigns appropriate tags. Whether using OpenAI, local models with Ollama, or Azure...
Install Open WebUI (Linux) – One-Liner Installer Script with Docker & Caddy HTTPS
Open WebUI is a powerful self-hosted web interface for Large Language Models. Whether you want to run local models with Ollama or connect to OpenAI,...
GPT-OSS 120B on AI Cube Pro: Run OpenAI's Open-Source Model Locally
With GPT-OSS 120B, OpenAI released their first open-weight model since GPT-2 in August 2025 – and it's impressive. The model achieves near o4-mini performance but...
Local AI Inference with our AI Cube: Your AI Infrastructure Under Your Own Control
In times of rising cloud costs, data sovereignty challenges and vendor lock-in, the topic of local AI inference is becoming increasingly important for companies. With...
Ollama vs. vLLM - A Comparison for Self-Hosted LLMs in Enterprise Use
More and more companies are considering running Large Language Models (LLMs) on their own hardware rather than via cloud APIs. The reasons for this are...
Let's Talk About Your Idea
Whether you have a specific IT challenge or just an idea – we look forward to hearing from you. In a brief conversation, we'll evaluate together whether and how your project fits with WZ-IT.

Timo Wevelsiep & Robin Zins
CEOs of WZ-IT
