Research · #llm · 📝 Blog · Analyzed: Dec 26, 2025 23:30

Building a Security Analysis LLM Agent with Go

Published: Dec 25, 2025 21:56
1 min read
Zenn LLM

Analysis

This article discusses the implementation of an LLM agent in Go that automates security alert analysis. A key aspect is that the agent is built from scratch, using only the LLM API rather than relying on frameworks such as LangChain. This approach offers greater control and customization, but it requires a deeper understanding of the underlying LLM interactions. The article appears to provide a detailed walkthrough covering both fundamental and advanced techniques for building a practical agent. It is valuable for developers seeking to integrate LLMs into security workflows and for those interested in a hands-on approach to LLM agent development.
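The article's own code is not reproduced in this digest. As a rough sketch of the framework-free approach it describes, the example below calls an OpenAI-compatible chat completions endpoint directly from Go using only the standard library; the endpoint URL, model name, system prompt, and sample alert are assumptions for illustration, not details taken from the article.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Minimal request/response types for an OpenAI-compatible chat completions API.
type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message chatMessage `json:"message"`
	} `json:"choices"`
}

// analyzeAlert sends a raw security alert to the LLM and returns its assessment.
// No agent framework is involved: prompt construction, the HTTP call, and
// response parsing are handled directly.
func analyzeAlert(apiKey, alert string) (string, error) {
	reqBody := chatRequest{
		Model: "gpt-4o-mini", // assumed model name, not taken from the article
		Messages: []chatMessage{
			{Role: "system", Content: "You are a security analyst. Triage the alert and explain whether it looks like a true positive."},
			{Role: "user", Content: alert},
		},
	}
	payload, err := json.Marshal(reqBody)
	if err != nil {
		return "", err
	}

	req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var parsed chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&parsed); err != nil {
		return "", err
	}
	if len(parsed.Choices) == 0 {
		return "", fmt.Errorf("empty response from LLM API")
	}
	return parsed.Choices[0].Message.Content, nil
}

func main() {
	// Example alert payload, invented for illustration.
	alert := `{"rule":"ssh-brute-force","src_ip":"203.0.113.7","attempts":42}`
	verdict, err := analyzeAlert(os.Getenv("OPENAI_API_KEY"), alert)
	if err != nil {
		fmt.Fprintln(os.Stderr, "analysis failed:", err)
		os.Exit(1)
	}
	fmt.Println(verdict)
}
```

A full agent would wrap this call in a loop that lets the model request tools (log search, IP reputation lookups, and so on) and feeds the results back as further messages; the single-shot call above only shows the bare API interaction that the from-scratch approach builds on.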
Reference

Automating security alert analysis with an LLM agent built from scratch in Go.

Research · #llm · 📝 Blog · Analyzed: Dec 25, 2025 17:44

Integrating MCP Tools and RBAC into AI Agents: Implementation with LangChain + PyCasbin

Published: Dec 25, 2025 08:05
1 min read
Zenn LLM

Analysis

This article discusses implementing Role-Based Access Control (RBAC) in LLM-powered AI agents that use the Model Context Protocol (MCP). It highlights the security risks of letting an LLM invoke tools autonomously without proper authorization and demonstrates how PyCasbin can restrict the actions of a LangChain ReAct agent based on the caller's role. The article focuses on practical implementation, covering MCP communication over HTTP + SSE and RBAC management with PyCasbin. It is a valuable resource for developers looking to strengthen the security and control of their AI agent applications.
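The article pairs LangChain with PyCasbin in Python; to keep the code in this digest in a single language, the sketch below expresses the same kind of role-based gate with Casbin's original Go library, checking a (user, tool, action) tuple before an agent-requested MCP tool call is allowed to proceed. The roles, tool names, and policy rules are invented for illustration and are not taken from the article.

```go
package main

import (
	"fmt"
	"log"

	"github.com/casbin/casbin/v2"
	"github.com/casbin/casbin/v2/model"
)

// Standard Casbin RBAC model: a request (subject, object, action) is allowed
// when the subject holds a role that a policy grants for that object/action.
const rbacModel = `
[request_definition]
r = sub, obj, act

[policy_definition]
p = sub, obj, act

[role_definition]
g = _, _

[policy_effect]
e = some(where (p.eft == allow))

[matchers]
m = g(r.sub, p.sub) && r.obj == p.obj && r.act == p.act
`

// guardedToolCall enforces the RBAC policy before an agent may invoke an MCP tool.
// invoke stands in for the actual MCP call; users, roles, and tool names here
// are hypothetical.
func guardedToolCall(e *casbin.Enforcer, user, tool string, invoke func() error) error {
	ok, err := e.Enforce(user, tool, "call")
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("user %q is not authorized to call tool %q", user, tool)
	}
	return invoke()
}

func main() {
	m, err := model.NewModelFromString(rbacModel)
	if err != nil {
		log.Fatal(err)
	}
	e, err := casbin.NewEnforcer(m)
	if err != nil {
		log.Fatal(err)
	}

	// Policies: analysts may only search, admins may also run destructive tools.
	e.AddPolicy("analyst", "search_logs", "call")
	e.AddPolicy("admin", "search_logs", "call")
	e.AddPolicy("admin", "delete_index", "call")
	// Role assignments.
	e.AddGroupingPolicy("alice", "analyst")
	e.AddGroupingPolicy("bob", "admin")

	for _, tool := range []string{"search_logs", "delete_index"} {
		err := guardedToolCall(e, "alice", tool, func() error {
			fmt.Println("calling MCP tool:", tool)
			return nil
		})
		if err != nil {
			fmt.Println("blocked:", err)
		}
	}
}
```

The point mirrors the article's: the authorization check sits between the agent's decision to use a tool and the actual MCP invocation, so a low-privilege role cannot trigger destructive tools even if the LLM asks for them.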
Reference

This article introduces how to implement RBAC (Role-Based Access Control) authorization for LLM-driven AI agents using MCP (Model Context Protocol).