NVIDIA has CUDA, AMD has ROCm, and then... if you're not so lucky, do you have nothing for GPU-accelerated LLM workloads? NO! Because Ollama now supports Vulkan! This means that if, like me, you have a machine that's not cutting edge, you can still leverage the power of the GPU to accelerate the...