Show HN: Execute local prompts in SSH remote shells

Posted by tgalal 4 hours ago


Instead of giving LLM tools SSH access or installing them on a server, the following command:

  $ promptctl ssh user@server
makes a set of locally defined prompts "magically" appear within the remote shell as executable command line programs.

For example, I have locally defined prompts for `llm-analyze-config` and `askai`. Then on (any) remote host I can:

  $ promptctl ssh user@host 
  # Now on remote host
  $ llm-analyze-config /etc/nginx.conf
  $ cat docker-compose.yml | askai "add a load balancer"
the prompts behind `llm-analyze-config` and `askai` execute on my local computer (even though they're invoked remotely) via the LLM of my choosing.
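For illustration only (a hypothetical sketch, not promptcmd's actual prompt format or API), a locally defined prompt can be thought of as a template that gets filled with the remote invocation's arguments and piped stdin before being sent to the LLM:

```python
# Hypothetical sketch: a prompt definition as a template with
# {args} and {stdin} placeholders. promptcmd's real format may differ.
def expand_prompt(template: str, args: list[str], stdin_text: str = "") -> str:
    """Fill a prompt template with the remote command's args and stdin."""
    return template.replace("{args}", " ".join(args)).replace("{stdin}", stdin_text)

# A local definition that could sit behind the remote `askai` command:
askai_template = "Task: {args}\nInput:\n{stdin}"

prompt = expand_prompt(askai_template,
                       ["add a load balancer"],
                       "services:\n  web:\n    image: nginx")
print(prompt)
```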

This way LLM tools are never granted SSH access to the server, and nothing needs to be installed on the server. In fact, the server does not even need outbound internet access.
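For the curious: one common way to get this effect without installing anything remotely (an assumption about the general technique, not necessarily how promptcmd implements it) is an SSH reverse-forwarded channel, where each remote "command" is a tiny shell function that streams its input back through the tunnel to a local listener, which runs the prompt and returns the LLM's answer. The round trip can be simulated in-process with a socket pair:

```python
import socket

def run_prompt_locally(request: bytes) -> bytes:
    # Stand-in for expanding the prompt and calling the local LLM.
    return b"LLM says: " + request

# Simulate the SSH tunnel with a socketpair: `remote_end` is what the
# remote shell function would write to, `local_end` is the listener
# on the laptop.
remote_end, local_end = socket.socketpair()

# Remote side: send the command's input, then signal EOF.
remote_end.sendall(b"analyze /etc/nginx.conf")
remote_end.shutdown(socket.SHUT_WR)

# Local side: read the full request, run it, send the response back.
request = b""
while chunk := local_end.recv(4096):
    request += chunk
local_end.sendall(run_prompt_locally(request))
local_end.shutdown(socket.SHUT_WR)

# Remote side: read the response until EOF and print it.
response = b""
while chunk := remote_end.recv(4096):
    response += chunk
print(response.decode())  # LLM says: analyze /etc/nginx.conf
```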

GitHub: https://github.com/tgalal/promptcmd/

Comments

Comment by lousyclicker 4 hours ago

That's a cool trick, but piping potentially sensitive server data back to your local machine and through an external LLM API kind of defeats the purpose of "never granting SSH access". Also curious about latency.

Comment by tgalal 4 hours ago

The point is to avoid installing tools on the server, or granting the LLM access and the "steering wheel" to the server itself. The data you pipe is the same data you'd copy-paste into ChatGPT or similar anyway. There is certainly a bit of latency when piping/reading a lot of data into the context, since everything is tunneled through the local machine, but I'd argue that the context size being limited by the LLM itself makes it acceptable for most use cases.