Self-reflecting Large Language Models: A Hegelian Dialectical Approach

  • Can Goksen,
  • Michael Solodko,
  • Saeed Amizadeh,
  • Julie E. Maybee,
  • Kazuhito Koishida


Investigating NLP through a philosophical lens has recently attracted researchers' attention, as it bridges computational methods with classical schools of philosophy. This paper introduces a philosophical framework inspired by the Hegelian dialectic to enable self-reflection in LLMs, using a self-dialectical approach to emulate internal critiques and synthesize new scientific ideas spanning domains such as mathematics, physics, and more. Additionally, we explore the effect of generation temperature in LLMs by introducing a dynamic annealing approach, which encourages creativity in the early stages and gradually shifts toward refinement and nuance, as well as a constant-temperature strategy. Furthermore, we implement a Multi-Agent Majority Voting (MAMV) strategy to assess the validity and novelty of the generated ideas, which proves useful in the absence of domain experts. Finally, we evaluate the effectiveness of our method in generating novel scientific ideas and improving LLMs' reasoning capabilities. Our experiments demonstrate promising results in ideation, along with significant improvements in mathematical and symbolic reasoning.
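The core mechanisms named in the abstract can be illustrated with a minimal sketch. Note that this is not the authors' implementation: the function names, the linear annealing schedule, and the thesis/antithesis/synthesis loop structure are assumptions for illustration, with a stub standing in for a real LLM call.

```python
from collections import Counter

def annealed_temperature(step, total_steps, t_start=1.2, t_end=0.2):
    """Dynamic annealing (assumed linear): high temperature early to
    encourage creativity, gradually lowered for refinement and nuance."""
    frac = step / max(total_steps - 1, 1)
    return t_start + (t_end - t_start) * frac

def majority_vote(judgments):
    """Multi-Agent Majority Voting (MAMV): return the verdict that most
    agent judges agree on, e.g. on an idea's validity or novelty."""
    return Counter(judgments).most_common(1)[0][0]

def dialectic_round(thesis, critique_fn, synthesize_fn, temperature):
    """One self-dialectical pass: thesis -> antithesis (internal
    critique) -> synthesis of a refined idea."""
    antithesis = critique_fn(thesis, temperature)
    return synthesize_fn(thesis, antithesis, temperature)
```

A driver would call `dialectic_round` for several rounds, feeding `annealed_temperature(step, total_steps)` into each round and applying `majority_vote` over multiple agents' assessments of the final synthesis; the exact prompts and judging criteria are beyond what the abstract specifies.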