In this tutorial, we walk through building a compact yet fully functional Cipher-based workflow. We start by securely capturing our Gemini API key in the Colab UI without exposing it in code. We then implement a dynamic LLM selection function that can automatically switch between OpenAI, Gemini, or Anthropic depending on which API key is available. The setup phase ensures Node.js and the Cipher CLI are installed, after which we programmatically generate a cipher.yml configuration to enable a memory agent with long-term recall. We create helper functions to run Cipher commands directly from Python, store key project decisions as persistent memories, retrieve them on demand, and finally spin up Cipher in API mode for external integration. Check out the FULL CODES here.
import os, getpass
os.environ["GEMINI_API_KEY"] = getpass.getpass("Enter your Gemini API key: ").strip()
import subprocess, tempfile, pathlib, textwrap, time, requests, shlex
def choose_llm():
    if os.getenv("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if os.getenv("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")
We start by securely entering our Gemini API key using getpass, so it stays hidden in the Colab UI. We then define a choose_llm() function that checks our environment variables and automatically selects the appropriate LLM provider, model, and key based on what is available.
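The priority order matters here: even though this tutorial captures a Gemini key, an OpenAI key already present in the environment would win. A minimal, self-contained sketch of that fallback behavior (restating choose_llm() and using a dummy, non-functional key value purely for illustration):

```python
import os

def choose_llm():
    # Providers are checked in a fixed priority order: OpenAI, then Gemini, then Anthropic.
    if os.getenv("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if os.getenv("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")

# With only a dummy Gemini key set, the Gemini tuple is selected.
for k in ("OPENAI_API_KEY", "GEMINI_API_KEY", "ANTHROPIC_API_KEY"):
    os.environ.pop(k, None)
os.environ["GEMINI_API_KEY"] = "dummy-value-for-illustration"
print(choose_llm())  # → ('gemini', 'gemini-2.5-flash', 'GEMINI_API_KEY')
```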
def run(cmd, check=True, env=None):
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p
We create a run() helper function that executes shell commands, prints both stdout and stderr for visibility, and raises an error if the command fails when check is enabled, making our workflow execution more transparent and reliable.
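As a quick illustration (restating run() so the snippet stands alone): check=False lets a best-effort step continue past a failure, while the default check=True turns any nonzero exit into a Python exception.

```python
import subprocess

def run(cmd, check=True, env=None):
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p

run("false", check=False)   # nonzero exit tolerated, returns the CompletedProcess
try:
    run("false")            # same command with check=True raises
except RuntimeError as e:
    print("caught:", e)
```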
def ensure_node_and_cipher():
    run("sudo apt-get update -y && sudo apt-get install -y nodejs npm", check=False)
    run("npm install -g @byterover/cipher")
We define ensure_node_and_cipher() to install Node.js, npm, and the Cipher CLI globally, ensuring our environment has all the required dependencies before running any Cipher-related commands.
def write_cipher_yml(workdir, provider, model, key_env):
    cfg = """
llm:
  provider: {provider}
  model: {model}
  apiKey: ${key_env}
systemPrompt:
  enabled: true
  content: |
    You are an AI programming assistant with long-term memory of prior decisions.
embedding:
  disabled: true
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y','@modelcontextprotocol/server-filesystem','.']
""".format(provider=provider, model=model, key_env=key_env)
    (workdir / "memAgent").mkdir(parents=True, exist_ok=True)
    (workdir / "memAgent" / "cipher.yml").write_text(cfg.strip() + "\n")
We implement write_cipher_yml() to generate a cipher.yml configuration file inside a memAgent folder, setting the chosen LLM provider, model, and API key, enabling a system prompt with long-term memory, and registering a filesystem MCP server for file operations.
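Note the ${key_env} detail: .format() substitutes only {key_env}, so the rendered file contains the literal string $GEMINI_API_KEY, an environment-variable reference for Cipher to expand at startup, rather than the key itself being written to disk. A small sketch of the rendering for the Gemini case (llm block only):

```python
# Render just the llm block of the template for the Gemini case.
cfg = """\
llm:
  provider: {provider}
  model: {model}
  apiKey: ${key_env}
""".format(provider="gemini", model="gemini-2.5-flash", key_env="GEMINI_API_KEY")
print(cfg)
# llm:
#   provider: gemini
#   model: gemini-2.5-flash
#   apiKey: $GEMINI_API_KEY
```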
def cipher_once(text, env=None, cwd=None):
    cmd = f'cipher {shlex.quote(text)}'
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env, cwd=cwd)
    print("Cipher says:\n", p.stdout or p.stderr)
    return p.stdout.strip() or p.stderr.strip()
We define cipher_once() to run a single Cipher CLI command with the provided text, capture and display its output, and return the response, allowing us to interact with Cipher programmatically from Python.
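The shlex.quote() call is what keeps arbitrary prompt text safe when the command runs through the shell: without it, quotes or semicolons inside a memory string could split or alter the command. For example:

```python
import shlex

text = "Store decision: use pydantic; prefer 'single quotes'"
cmd = f"cipher {shlex.quote(text)}"
print(cmd)
# The whole prompt reaches the cipher binary as a single argument,
# with the embedded semicolon and quotes neutralized.
```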
def start_api(env, cwd):
    proc = subprocess.Popen("cipher --mode api", shell=True, env=env, cwd=cwd,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    for _ in range(30):
        try:
            r = requests.get("http://127.0.0.1:3000/health", timeout=2)
            if r.ok:
                print("API /health:", r.text)
                break
        except Exception:
            pass
        time.sleep(1)
    return proc
We create start_api() to launch Cipher in API mode as a subprocess, then repeatedly poll its /health endpoint until it responds, ensuring the API server is ready before proceeding.
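The same poll-until-ready pattern can be exercised without Cipher installed. The sketch below runs it against a stand-in /health server built from the standard library (the real start_api() targets Cipher's port 3000; here the stub binds any free port):

```python
import threading, time, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Health(BaseHTTPRequestHandler):
    # Stand-in for Cipher's API: /health answers 200 "ok".
    def do_GET(self):
        ok = self.path == "/health"
        self.send_response(200 if ok else 404)
        self.end_headers()
        self.wfile.write(b"ok" if ok else b"")
    def log_message(self, *args):  # silence per-request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Health)  # port 0 = pick any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_port}/health"

# Same shape as the loop in start_api(): retry until the endpoint answers.
body = None
for _ in range(30):
    try:
        with urllib.request.urlopen(url, timeout=2) as r:
            if r.status == 200:
                body = r.read().decode()
                print("health:", body)
                break
    except Exception:
        pass
    time.sleep(1)
srv.shutdown()
```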
def main():
    provider, model, key_env = choose_llm()
    ensure_node_and_cipher()
    workdir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_demo_"))
    write_cipher_yml(workdir, provider, model, key_env)
    env = os.environ.copy()
    cipher_once("Store decision: use pydantic for config validation; pytest fixtures for testing.", env, str(workdir))
    cipher_once("Remember: follow conventional commits; enforce black + isort in CI.", env, str(workdir))
    cipher_once("What did we standardize for config validation and Python formatting?", env, str(workdir))
    api_proc = start_api(env, str(workdir))
    time.sleep(3)
    api_proc.terminate()

if __name__ == "__main__":
    main()
In main(), we select the LLM provider, install dependencies, and create a temporary working directory with a cipher.yml configuration. We then store key project decisions in Cipher's memory, query them back, and finally start the Cipher API server briefly before shutting it down, demonstrating both CLI- and API-based interactions.
In conclusion, we now have a working Cipher environment that securely manages API keys, automatically selects the best available LLM provider, and configures a memory-enabled agent entirely through Python automation. Our implementation includes decision logging, memory retrieval, and a live API endpoint, all orchestrated in a Notebook/Colab-friendly workflow. This makes the setup reusable for other AI-assisted development pipelines, allowing us to store and query project knowledge programmatically while keeping the environment lightweight and easy to redeploy.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.

