
DEV Community

# llm

Posts

- Stop Fighting with Pandas: Let Prompt Drive Your DataFrames (12 min read)
- Stop the LLM From Rambling: Using Penalties to Control Repetition (5 min read)
- Stop Parsing Nightmares: Prompting LLMs to Return Clean, Parseable JSON (9 min read)
- When Generated Tests Pass but Miss the Bug: A Case of False Confidence from AI Test Generation (3 min read)
- Scaling Autonomy: Architecting Cost-Efficient Agentic AI for the Enterprise (6 min read)
- When an AI Suggests DataFrame.append: Missing Pandas Deprecations in Generated Code (3 min read)
- Prompting for Safety: How to Stop Your LLM From Leaking Sensitive Data (9 min read)
- When code assistants suggest deprecated Pandas APIs: a subtle, production-breaking failure mode (3 min read)
- When AI Refactors Break Naming: a case of inconsistent variable renames across files (3 min read)
- GemDesk: Reason across all your data. (1 min read)
- When code suggestions push deprecated Pandas APIs: a postmortem (3 min read)
- Prompt Injection Attacks: The Hidden Security Threat in AI Applications (14 min read)
- KitOps Wrap 2025🔥 (3 min read)
- When Codegen Suggests Deprecated Pandas APIs — a Cautionary Tale (3 min read)
- Building Shared Memory with AI (5 min read)
- Instructions Are Not Control (3 min read)
- Building an AI-Powered Portfolio Assistant with Model Context Protocol (15 min read)
- LLMs Don’t Have a Security Layer — So I Built One (2 min read)
- When long chats change the code: context drift and hidden errors (3 min read)
- LLM Orchestration Architecture (2 min read)
- I Built an AI Tarot Reading Tool That Goes Deeper Than Just Card Meanings (2 min read)
- When an LLM Renames Things: Inconsistent Variable Naming During Multi-file Refactors (3 min read)
- Context drift and hidden errors in long AI-assisted coding sessions (3 min read)
- Debugging Non-Deterministic LLM Agents: Implementing Checkpoint-Based State Replay with LangGraph Time Travel (15 min read)
- I Trained Probes to Catch AI Models Sandbagging (6 min read)