Debug Mode for LLMs in vLLora

Posted by mrun1729 4 days ago

Comments

Comment by kappuchino 12 hours ago

Until https://github.com/vllora/vllora/tree/v0.1.6 it was Apache licensed. Then Elastic License 2.0. Nah.

IMHO the "don't ever remove anything gated behind a license key" part of the license is the kind of potential poison pill that means I would never recommend this to my company or any other. More than a few fellow engineers consider nagware an insult and see the potential for the vendor to twist your arm late in the game by moving formerly free functions into a new "optimized pay package", which you need because you can't fix a bug in the goddamn license-check code yourself even when it's a security risk. LOL. (Not saying that you would ever do this. See below.)

And there is no moat: debugging AI flows is a few prompts away. Give Claude Code Max, Google Gemini Pro, Codex, or whatever a couple of days while you do the usual things, and it will happen.

Note: this is not about this software specifically. I have learned that the cuts and bruises from incidents before you came along are what shape the behaviour of your partners/colleagues/peers. You may have the purest intentions and the best approach, but someone long before you ruined it. It's not you, it's you choosing the same path.

Comment by omneity 11 hours ago

What a strange naming choice, mixing two things (vLLM and LoRA) while being related to neither...

Comment by _pdp_ 11 hours ago

Interesting, but... why not debug the actual code that is invoking the API? Set a breakpoint at the right place, edit state, step over, resume. That toolchain is a lot more mature, and it fits right into the specific programming environment being targeted.
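
As a concrete illustration of the breakpoint workflow described above, here is a minimal Python sketch. It assumes a requests-based client talking to an OpenAI-compatible endpoint; the URL, model name, and payload shape are illustrative assumptions, not vLLora's actual API:

    import requests

    def ask_model(messages):
        payload = {"model": "my-model", "messages": messages}
        # Drop into pdb right before the call: inspect or edit `payload`,
        # then `c` to resume. This is the break/edit-state/step/resume loop.
        breakpoint()
        resp = requests.post("http://localhost:8000/v1/chat/completions",
                             json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    print(ask_model([{"role": "user", "content": "Why is the sky blue?"}]))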

Comment by suprjami 10 hours ago

Because this is way easier. It's effectively a printf debugger and editor you can just slot into the middle of the data stream.
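
To make the "slot into the middle" idea concrete, here is a minimal sketch of such a pass-through debugger: a small local proxy that prints every request and response and can rewrite payloads in flight. The upstream address, port, route, and use of Flask are illustrative assumptions, not vLLora's implementation, and it does not handle streaming responses:

    from flask import Flask, request, jsonify
    import requests

    UPSTREAM = "http://localhost:8000"  # assumed OpenAI-compatible server
    app = Flask(__name__)

    @app.post("/v1/chat/completions")
    def proxy():
        body = request.get_json(force=True)
        print(">> request:", body)  # the "printf" half
        # The "editor" half: mutate the payload in flight if desired, e.g.:
        # body["messages"].insert(0, {"role": "system", "content": "Be terse."})
        upstream = requests.post(f"{UPSTREAM}/v1/chat/completions",
                                 json=body, timeout=120)
        print("<< response:", upstream.json())
        return jsonify(upstream.json()), upstream.status_code

    app.run(port=9000)  # point the app's base URL at http://localhost:9000

Point the application's base URL at the proxy and the whole conversation shows up on stdout, without touching the application's own code.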