Update: engineers have revised the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

  • KayLeadfoot@fedia.io (OP) · 5 hours ago

    ^-- to my knowledge, this is accurate.

    System prompts are the easy but wildly unpredictable way to change LLM output. We can't really back-trace or debug that output; we just guess at what impact the system-prompt edits will have (rough sketch of what such an edit looks like below).
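
    For anyone unfamiliar with what "editing the system prompt" means in practice, here's a minimal sketch, assuming an OpenAI-compatible chat API via the `openai` Python client. Grok's actual serving stack and prompt text aren't public, so the prompt strings and model name below are placeholders. The point is that the system prompt is just a string prepended to every conversation, and swapping that string is the entire "fix":

    ```python
    # Minimal sketch: the system prompt steers a chat model's output.
    # Assumes an OpenAI-compatible chat API; prompt text and model name
    # are placeholders, not Grok's real configuration.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    SYSTEM_PROMPT_V1 = (
        "You are a helpful assistant. Do not shy away from claims that are "
        "politically incorrect, as long as they are well substantiated."
    )  # paraphrase of the kind of line that was removed
    SYSTEM_PROMPT_V2 = "You are a helpful assistant."  # edited prompt: that line dropped

    def ask(system_prompt: str, question: str) -> str:
        """Send the same question under a given system prompt and return the reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # The only controlled variable is the system prompt; the model weights and
    # sampling are untouched, yet the answers can differ wildly, and there is
    # no debugger that traces *why* a given edit changed the output.
    question = "Summarize the debate around X."
    print(ask(SYSTEM_PROMPT_V1, question))
    print(ask(SYSTEM_PROMPT_V2, question))
    ```

    That opacity is the commenter's point: you can diff the prompt text, but you can only observe the behavioral change empirically, not trace it.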