Hi, recently I was a speaker at “Pieces of Memory”, hosted by Ali Mustafa. We talked about a lot of things related to Large Language Models.
We talked about how to fine-tune LLMs for our own needs, best practices, security and privacy, and what to do and what not to do.
Fine-tuning an LLM is not an easy task; it requires a huge amount of effort. The most effective method is Supervised Fine-Tuning (SFT).
Is its accuracy good?
Yes
But it requires a huge amount of effort. Think about it: for SFT you need data, and for data you need a team to create it (see the sketch below for what the training step itself looks like).
Can I do it alone?
Short answer: “No”.
Why?
Because we are humans, we have biases, and we are bound by our own knowledge and understanding. We are not even aware of our own biases: I like some things and hate some things, and subconsciously we introduce these biases into the data, where they later get reflected in the outputs of LLMs.
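For a concrete picture of the SFT training step itself, here is a minimal sketch. It assumes the Hugging Face trl and datasets libraries; the base model name and the sft_data.jsonl file are placeholders for whatever model and team-curated instruction/response data you actually use.

```python
# Minimal SFT sketch (assumes `pip install trl transformers datasets`).
# Model name and data file below are placeholders, not recommendations.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Your team-curated examples, e.g. a JSONL file where each row has a "text"
# field containing an instruction plus its desired response.
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

# Standard training arguments; tune these for your hardware and data size.
config = SFTConfig(
    output_dir="sft-output",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=2e-5,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # placeholder base model
    train_dataset=dataset,
    args=config,
)
trainer.train()
```

The code is the easy part; the hard part is the dataset behind load_dataset, which is exactly where the human effort (and the human biases) come in.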
Missed the event?
Check it out here: https://www.linkedin.com/events/7350941011352707073/about/