Evaluation of Different Large Language Model Agent Frameworks for Design Engineering Tasks
Paper in proceedings, 2024

This paper evaluates the ability of Large Language Models (LLMs) to support engineering tasks. Reasoning frameworks such as agents and multi-agents are described and compared. The frameworks are implemented with the LangChain Python package for an engineering task. The results show that a supportive reasoning framework can increase the quality of responses compared to a standalone LLM. The frameworks' applicability to other engineering tasks is discussed. Finally, a perspective on task ownership among the designer, traditional software, and Generative AI is presented.

Design Cognition

Artificial Intelligence (AI)

Large Language Models (LLMs)

Design Automation

Author

Alejandro Pradas Gómez

Chalmers, Industrial and Materials Science, Product Development

Massimo Panarotto

Chalmers, Industrial and Materials Science, Product Development

Ola Isaksson

Chalmers, Industrial and Materials Science, Product Development

DS 130: Proceedings of NordDesign 2024


978-1-912254-21-7 (ISBN)

Reykjavik, Iceland

DEFAINE (Design Exploration Framework based on AI for froNt-loaded Engineering)

VINNOVA (2020-01951), 2020-09-01 -- 2023-08-31.

Subject Categories

Other Computer and Information Science

Other Mechanical Engineering

Aerospace Engineering

Areas of Advance

Production

DOI

10.35199/NORDDESIGN2024.74

More information

Created

10/8/2024