
Yesterday I gave ChatGPT a task and noticed that, despite clear step-by-step instructions to return the information from a simple multi-column table with 38 records, it stopped generating the correct response. After some back and forth it returned only 15/16 records, and I had to ask it to complete the answer for all records of the table:

please complete the task for all records of table. you return 15 out of 38 records.

and then it returned just one record more than the previous answer, which made it frustrating to chat like this and still get incomplete results.

In the end I used a trick: I asked it to start from the end of the table and merged the results!

  • I'm curious why such a clear task gets done incompletely. I was assuming there might be a restriction on generating long answers, but returning the info of a table with 38 records should not count as a long task in terms of the length of the generated response, should it?
  • Are there some prompt engineering techniques I should know, besides defining the task step by step and using the right vocabulary?

Note:

  • I had a premium account while doing this experiment and conversation.
  • Used model: GPT-4o
Mario

1 Answer


For closed-source models we can only guess why it doesn't return all the rows. For open-source models I know there are repetition and length penalties you can adjust (https://docs.vllm.ai/en/v0.4.0.post1/dev/sampling_params.html). GPT-4o could also be applying a repetition or length penalty that we don't know about.
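As a rough sketch, this is how those knobs look in vLLM's `SamplingParams`; the model name, penalty value, and token limit below are placeholders, not recommendations:

```python
# Sketch: tuning repetition/length-related sampling knobs in vLLM.
# The model name and the concrete values are illustrative assumptions.
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(
    repetition_penalty=1.0,  # values > 1.0 discourage repeating tokens
    max_tokens=2048,         # raise the cap so a long table fits in one answer
    temperature=0.0,         # deterministic output for extraction-style tasks
)

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # any local model
outputs = llm.generate(["Return every row of the table below:\n<table here>"],
                       sampling_params)
print(outputs[0].outputs[0].text)
```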

Why don't you split the table into smaller chunks and send each one separately to the model?
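A minimal sketch of that idea with the OpenAI Python client; the chunk size, prompt wording, and the assumption that the rows are already available as strings are mine, not from the question:

```python
# Sketch: process a table in chunks and merge the per-chunk answers.
# Assumes `rows` holds the 38 table rows as strings; chunk size and
# prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def process_table(rows, chunk_size=10, model="gpt-4o"):
    results = []
    for i in range(0, len(rows), chunk_size):
        chunk = "\n".join(rows[i:i + chunk_size])
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Return the requested info for every row. Do not skip rows."},
                {"role": "user", "content": f"Table rows:\n{chunk}"},
            ],
        )
        results.append(response.choices[0].message.content)
    # Merge the per-chunk answers back together
    return "\n".join(results)
```

Smaller chunks keep each response well under any length limit, so the model has no reason to truncate, and the merge step is just string concatenation on your side.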

r000bin