Evaluation of Different Large Language Model Agent Frameworks for Design Engineering Tasks
Paper in proceedings, 2024

This paper evaluates the ability of Large Language Models (LLMs) to support engineering tasks. Reasoning frameworks such as single agents and multi-agent systems are described and compared. The frameworks are implemented with the LangChain Python package for an engineering task. The results show that a supportive reasoning framework can increase the quality of responses compared to a standalone LLM. The applicability of these frameworks to other engineering tasks is discussed. Finally, a perspective on task ownership among the designer, traditional software, and generative AI is presented.
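To make the agent idea concrete, the following is a minimal, self-contained sketch of a single-agent "reason, act, observe" loop in plain Python. It is not the paper's LangChain implementation: the tool name, the mock LLM policy, and the prompt format are illustrative assumptions standing in for a real model call and tool set.

```python
# Minimal sketch of an LLM agent loop: the model proposes an action,
# a tool is executed, and the observation is fed back until the model
# emits a final answer. All names here are illustrative assumptions.

def calculator(expression: str) -> str:
    """Toy engineering tool: evaluate a numeric expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def mock_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real agent would query a model here."""
    if "Observation:" not in prompt:
        return "Action: calculator[2 * (3 + 4)]"
    return "Final Answer: 14"

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        reply = mock_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]" and invoke the chosen tool
        name, arg = reply.removeprefix("Action: ").rstrip("]").split("[", 1)
        observation = TOOLS[name](arg)
        # Append the action and its result so the next call can reason on it
        prompt += f"\n{reply}\nObservation: {observation}"
    return "No answer within step budget"
```

A multi-agent variant of the same pattern would route each reply to a different role-specialised agent instead of a single loop.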

Keywords

Design Cognition

Artificial Intelligence (AI)

Large Language Models (LLM)

Design Automation

Authors

Alejandro Pradas Gómez

Chalmers, Industrial and Materials Science, Product Development

Massimo Panarotto

Chalmers, Industrial and Materials Science, Product Development

Ola Isaksson

Chalmers, Industrial and Materials Science, Product Development

DS 130: Proceedings of NordDesign 2024


978-1-912254-21-7 (ISBN)

DS 130: Proceedings of NordDesign 2024, Reykjavik, Iceland

DEFAINE (Design Exploration Framework based on AI for froNt-loaded Engineering)

VINNOVA (2020-01951), 2020-09-01 -- 2023-08-31.

Subject categories

Other Computer and Information Science

Other Mechanical Engineering

Aerospace Engineering

Areas of Advance

Production

DOI

10.35199/NORDDESIGN2024.74

More information

Created

2024-10-08