pcwelder 19 minutes ago

```
try:
    answer = chain.invoke(question)
    # print(answer) # raw JSON output
    display_answer(answer)
except Exception as e:
    print(f"An error occurred: {e}")
    chain_no_parser = prompt | llm
    raw_output = chain_no_parser.invoke(question)
    print(f"Raw output:\n\n{raw_output}")
```

Wait, are you calling the LLM again when parsing fails, just to get the output the LLM has already sent you?

The whole thing is not difficult to do if you call the API directly without LangChain; that would also help you avoid this kind of inefficiency.
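For reference, a minimal sketch of the single-call version, reusing `prompt`, `llm`, `question`, and `display_answer` from the snippet above and assuming the original chain's parser is something like LangChain's `JsonOutputParser`: invoke the chain without the parser once, keep the raw text, and parse it locally, so a parse failure doesn't trigger a second request.

```
# Minimal sketch: call the LLM once, keep the raw output, parse it locally.
# `prompt`, `llm`, `question`, `display_answer` come from the snippet above;
# JsonOutputParser is an assumption about which parser the original chain used.
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()
chain_no_parser = prompt | llm

raw_output = chain_no_parser.invoke(question)  # the only LLM call
raw_text = raw_output.content                  # AIMessage -> string

try:
    answer = parser.parse(raw_text)            # parse locally, no second request
    display_answer(answer)
except Exception as e:
    print(f"An error occurred: {e}")
    print(f"Raw output:\n\n{raw_text}")        # raw output is already in hand
```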

dcreater an hour ago

People still use langchain?

  • anshumankmr 7 minutes ago

    It's good for quickly developing something, but for production I don't think so. We used it for a RAG application I built last year with a client, ended up removing it piece by piece, and found our app responded faster.

    But orgs think it's some sort of flagbearer of LLMs. As I'm interviewing for other roles now, HRs from other companies still ask how many years of experience I have with LangChain and Agentic AI.