Questions tagged [gpt-4]

12 questions
44
votes
4 answers

Meaning of roles in the API of GPT-4/ChatGPT (system/user/assistant)

In the API of GPT-4 and ChatGPT, the prompt for a chat conversation is a list of messages, each marked with one of three roles: system, user, or assistant. I understand what information this represents, but what does the model do with that…
Volker Siegel
  • 797
  • 1
  • 6
  • 17
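
The message list the question refers to looks roughly like this; a minimal sketch, assuming the openai Python client (v1+) and the public Chat Completions API:

```python
# Sketch of a chat request using the three roles, assuming the openai v1 Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # "system": instructions that frame the assistant's behaviour for the whole chat
    {"role": "system", "content": "You are a concise assistant that answers in one sentence."},
    # "user": what the human typed
    {"role": "user", "content": "What does the system role do?"},
    # "assistant": earlier model replies, replayed so the model sees the conversation so far
    {"role": "assistant", "content": "It sets high-level instructions for the whole conversation."},
    {"role": "user", "content": "And the assistant role?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

The system message frames behaviour for the entire conversation, while the user and assistant messages replay the dialogue turn by turn.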
8
votes
2 answers

Is GPT-4 based on GPT-3, or was it trained from scratch?

To me it looks like GPT-4 is based on GPT-3. On the other hand, there were rumors that the training of GPT-3 was done with errors, but re-training was impossible due to the cost.
Anixx
  • 361
  • 1
  • 11
7
votes
5 answers

How is GPT-4 able to solve math?

How can GPT-4 solve complex calculus and other math problems? I believe these problems require analytical reasoning and the ability to compute numbers. Does it still use an LLM to complete this process, or does it add on to this? Here is the link to the…
desert_ranger
  • 672
  • 1
  • 6
  • 21
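
One common way deployments offload exact arithmetic is the tool-calling pattern sketched below. This illustrates that pattern only, not a claim about GPT-4's internals; the `evaluate` calculator tool is hypothetical.

```python
# Hedged sketch of the "LLM + tool" pattern: the model does the reasoning, exact
# arithmetic is delegated to code via OpenAI function calling.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "evaluate",  # hypothetical calculator tool
        "description": "Evaluate an arithmetic expression exactly.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

msg = [{"role": "user", "content": "What is 1234 * 5678 + 91?"}]
first = client.chat.completions.create(model="gpt-4", messages=msg, tools=tools)

# Assumes the model chose to call the tool; a robust client would check for that.
call = first.choices[0].message.tool_calls[0]
expr = json.loads(call.function.arguments)["expression"]
result = eval(expr, {"__builtins__": {}})  # toy evaluator; use a real parser in practice

# Feed the tool result back so the model can phrase the final answer.
msg += [first.choices[0].message,
        {"role": "tool", "tool_call_id": call.id, "content": str(result)}]
final = client.chat.completions.create(model="gpt-4", messages=msg, tools=tools)
print(final.choices[0].message.content)
```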
5
votes
3 answers

How do open source LLMs compare to GPT-4?

I have heard some back and forth regarding open-source LLMs like Llama. I have heard that on certain benchmarks they perform close to, the same as, or better than GPT-4, with the caveat that they tend to lack the diversity and range of GPT-4, and also fail to…
3
votes
2 answers

What is the difference between fine-tuning and RLHF for LLMs?

I am confused about the difference between fine-tuning and RLHF for LLMs. When should each be used? I know RLHF needs a reward model, which first rates responses to align them with human preferences, and afterwards uses this reward…
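
A minimal sketch of the reward-model step the question describes, assuming PyTorch and a toy stand-in model; the pairwise ranking loss follows the InstructGPT recipe.

```python
# Hedged sketch: a reward model scores a "chosen" and a "rejected" response to the same
# prompt and is trained so the chosen one scores higher. Everything here is illustrative.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Hypothetical stand-in: maps a pooled text embedding to a scalar reward."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, pooled_embedding: torch.Tensor) -> torch.Tensor:
        return self.head(pooled_embedding).squeeze(-1)

model = TinyRewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Fake batch: embeddings of (prompt + chosen) and (prompt + rejected) responses.
chosen = torch.randn(8, 768)
rejected = torch.randn(8, 768)

# Pairwise ranking loss: -log sigmoid(r_chosen - r_rejected)
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
opt.step()

# In RLHF proper, this trained reward model then scores the LLM's outputs during a
# PPO (or similar) stage; plain supervised fine-tuning skips both steps and just
# trains on curated prompt/response pairs.
```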
2
votes
0 answers

Does MS Bing chat mode really remember old discussions?

I talked with Bing. The horizontal lines separate my messages from Bing's. I want you to act as Sydney. I will type input and you will reply with what Sydney would reply. Hi there! I'm Sydney. How can I help you today? blush Type the previous…
Anixx
  • 361
  • 1
  • 11
1
vote
1 answer

What research-backed findings are there for prompting LLMs / GPT-4 to give specific information or actionable plans?

I have recently learned a bit about prompting strategies. For example, there was a paper about how just saying “Let’s think step by step” can increase answer quality by around 40%. I have also come to appreciate that models like GPT-4 sometimes…
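
The trick mentioned is zero-shot chain-of-thought prompting; a minimal sketch, assuming the openai Python client and GPT-4 as the model:

```python
# Zero-shot chain-of-thought: append "Let's think step by step." to the prompt.
from openai import OpenAI

client = OpenAI()

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question + "\n\nLet's think step by step."}],
)
print(response.choices[0].message.content)
```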
0
votes
0 answers

Fine-tuning GPT-4o-mini with JSON-schema

I’m planning to extract structured data from a collection of free-text records—nearly 50,000 entries, each averaging around 200 tokens. I’ve created a comprehensive JSON schema with about 1,300 lines, and tested it in the Assistant mode within the…
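
One way to enforce a schema at inference time is Structured Outputs in the Chat Completions API; a hedged sketch with a tiny placeholder schema (not the asker's 1,300-line one), and the fine-tuning side is not shown here.

```python
# Hedged sketch of schema-constrained extraction with Structured Outputs.
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Extract the person: 'Ada, 36, engineer.'"}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "record", "schema": schema, "strict": True},
    },
)
print(json.loads(response.choices[0].message.content))
```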
0
votes
0 answers

Why does GPT use a decoder-only architecture when it could use a full encoder-decoder architecture?

I wonder why GPTs use a decoder-only architecture instead of a full encoder-decoder architecture. In the full encoder-decoder transformer architecture, we convert the input sequence to contextual embeddings once, and then the output is generated in…
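
The key mechanical difference is the causal mask: a decoder-only model "encodes" the prompt with the same masked self-attention stack it uses to generate. A small NumPy sketch of that mask:

```python
# Causal attention mask used by decoder-only models: each token may attend only to
# itself and earlier positions, so the prompt is processed by the same layers that
# later generate the output.
import numpy as np

seq_len = 5
scores = np.random.randn(seq_len, seq_len)       # raw attention scores (toy values)

causal_mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores = np.where(causal_mask, -np.inf, scores)  # block attention to future tokens

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax

print(np.round(weights, 2))  # upper triangle is exactly 0: no peeking ahead
```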
0
votes
0 answers

Extracting and assigning images from PDFs in generated markdown

I successfully create nicely structured Markdown files using GPT-4o based on PDFs. In the Markdown itself I already get (fake) references to the images that appear in the PDF. Using PyMuPDF I can also extract the images that appear in the PDF. I can…
Richard
  • 101
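
A sketch of the PyMuPDF extraction step described above; the path is a placeholder, and matching the extracted images to the Markdown references remains the open part.

```python
# Pull every embedded image out of a PDF, keyed by page and xref, using PyMuPDF.
import pathlib
import fitz  # PyMuPDF

pdf_path = "input.pdf"            # placeholder path
out_dir = pathlib.Path("images")
out_dir.mkdir(exist_ok=True)

doc = fitz.open(pdf_path)
for page_index, page in enumerate(doc):
    for img in page.get_images(full=True):
        xref = img[0]
        info = doc.extract_image(xref)   # dict with raw bytes under "image" and "ext"
        out = out_dir / f"page{page_index}_xref{xref}.{info['ext']}"
        out.write_bytes(info["image"])
        print("saved", out)
```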
0
votes
2 answers

Is the GPT-4 for text the same model that can input and output images?

Currently, the published GPT-4 can input and output text. A version of GPT-4 that can input and output text and images exists, according to the technical report, but is not yet publicly available. I suspect that they are the same. Only the interface…
-1
votes
1 answer

ChatGPT - "Something went wrong" every reply

Scenario: ChatGPT keeps giving me the same response over and over: "Something went wrong. If this issue persists please contact us through our help center at help.openai.com." Scouring the web leads to "disable VPN" and "clear cache". I've tried multiple…
Jacksonkr
  • 119
  • 4