Fine-tuning Llama 2 not affecting output

I have been trying to fine-tune Llama 2 (7b) for a couple of days and I just can’t get it to work.

I tried both the base and the chat model (I'm leaning towards the chat model, since I could use the built-in censoring), with different prompt formats, using LoRA (via TRL, LlamaTune, and other examples I found).

Training doesn’t fail, but when I run the fine-tuned model I don’t see any difference in the output; it’s as if nothing changed. Any ideas about what could be happening? Or a guide that worked for you that I could follow?
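For what it’s worth, my mental model of LoRA (just a toy sketch in plain Python, not my actual training code) is that the adapter adds a low-rank update B @ A on top of the frozen weights, so if that update somehow stays at its zero initialization, the outputs would be identical to the base model — which is exactly what I’m seeing:

```python
def matvec(M, v):
    # Multiply matrix M by vector v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def add_update(W, B, A):
    # Effective LoRA weight: W + B @ A (low-rank update on the frozen W).
    BA = [[sum(B[i][k] * A[k][j] for k in range(len(A)))
           for j in range(len(A[0]))] for i in range(len(B))]
    return [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

W = [[1.0, 2.0], [3.0, 4.0]]   # frozen base weight (toy stand-in)
x = [1.0, 0.0]                 # input
A = [[0.5, -0.5]]              # rank-1 adapter factor

# LoRA initializes B to zero, so before any real training W + B @ A == W.
B_untrained = [[0.0], [0.0]]
print(matvec(add_update(W, B_untrained, A), x) == matvec(W, x))  # True

# Once training moves B off zero, the outputs should differ.
B_trained = [[1.0], [-1.0]]
print(matvec(add_update(W, B_trained, A), x) == matvec(W, x))    # False
```

So my guess is that either the adapter isn’t being trained, or it isn’t actually being loaded at inference time — but I can’t tell which.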

Thanks!