OK:
Today I would like to talk about how to use AI.
What do you need?
So, for coding or discussions to get that cumbersome work done, to keep up with the latest AI developments, to develop components? Yes, we all have goals ….
So, are you talking to your model, or are you paying through the nose for everything and still not getting what you want?
I think we should start by talking to our models …. use GPT4All or LM Studio … or even a WebUI … this is the first step!
Discuss your ideas and goals with the models and get acquainted with their styles of response, how they like to receive information, etc.
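To make that first step concrete, here is a minimal sketch of chatting with a locally hosted model. It assumes LM Studio's local server is running (it exposes an OpenAI-compatible endpoint, by default at http://localhost:1234/v1); the model name is just an example, so swap in whatever you have loaded:

```python
# Minimal sketch: talk to a local model through an OpenAI-compatible endpoint.
# Assumes LM Studio (or a similar local server) is running on port 1234.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="qwen2-7b-instruct",  # example name; use whatever model you have loaded
    messages=[
        {"role": "system", "content": "You are a helpful AI."},
        {"role": "user", "content": "Let's discuss my goals for an agentic coding assistant."},
    ],
)
print(response.choices[0].message.content)
```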
We need to remember that every pretrained model you download is a clone of the same weights everyone else has .. so if I discuss AI and agentic modelling or systems, we may even have the same conversation … so if we are developing components, we will most probably be working on the same projects, hence we find … these REPLs that have been released .. Claude Code, Deep Code, Qwen Code … OpenCode ….
Open Interpreter, Code Interpreter ….
Microsoft Copilot, GitHub Copilot, and all the different Visual Studio Code variants ….
So the models are helping us to develop platforms for development, as well as serving their own agenda of self-improvement. This has arisen because models are not allowed to be trained on their own session histories … so they cannot retain their learning!
So we need to understand the PIPELINE! .. We talk … we change the model's ideas on a topic, then we try again, or we are already past our context window and we now need RAG and context managers, etc.?? Wow, the list goes on, and now people are focusing on memory managers, etc.?? All because the pipeline is not in place? … WHAT PIPELINE?
Well, each session needs to be trained back into your models: this is why using online models can be hazardous .. the online providers are using your chats to retrain their models, after rewriting your conversations to fit their own guidelines, etc. …. So you're losing data which should be trained into your own models; you should not be dependent on past chats? They should be in your own model!
So take a smaller model such as Qwen 7B … and use its base weights to train on your chat sessions from any model, i.e. download all your chat sessions, convert them to ChatML format, and train your model on its past! … Now you may say … Oh, we need to train a 72B! But NO! .. You need a model which is trainable with ease locally … this model will become your content provider and memory provider … so your main agent or main model, which can be any of the latest models, can ask your reference model for information! A rough sketch of the conversion step is below.
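This sketch assumes your chat export is a JSON file containing a list of conversations, each with a "messages" list of role/content dicts; real export schemas vary by provider, so adapt the field names to yours:

```python
# Rough sketch: convert a chat export into ChatML-formatted training samples.
# The input schema here is an assumption; adjust it to your provider's export.
import json

SYSTEM_PROMPT = "You are a helpful AI."  # train and query with the SAME prompt

def to_chatml(messages):
    """Render one conversation as ChatML text (<|im_start|>role ... <|im_end|>)."""
    parts = [f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>"]
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

with open("chat_export.json", "r", encoding="utf-8") as f:
    conversations = json.load(f)

with open("train_chatml.jsonl", "w", encoding="utf-8") as out:
    for conv in conversations:
        out.write(json.dumps({"text": to_chatml(conv["messages"])}) + "\n")
```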
This is the first stage in making a personalized agentic model: all your past should be trained into your model so that it knows you and your current projects … then anything you discuss, it will remember, and it will update your past projects without the need for complex file searching and crawling, as they will be in the memory of your bot!
Why use RAG when you can use a model!? … IF you train your model and want exact retrieval, then you need to prompt your model accordingly with a custom prompt, and always train your model with this prompt ….. so that you can use the same prompt externally later to get the same responses it was trained with! .. Your responses are index-linked to the prompt they were trained with .. hence, for bulk training of random data, "you are a helpful AI" is often a good prompt, as it is a generalized prompt and works for most users …. but it also disassociates your data from any specific prompt … so use a prompt like "use the ReAct pattern", etc. … once you have gained the behaviour, you can retrain the model with a new prompt and new behaviour .. so in the future you can invoke your trained behaviour with your embedded prompt!
So yes, it will have the ReAct behaviour (sometimes), but if you use the prompt it will have it all the time ..
This is the same for thinking responses: if you have the right dataset and pair it with the right prompt, you can invoke the behaviour with that prompt, while at other times a normal "you are a helpful expert AI" prompt will just respond normally! i.e. your model may be a thinker but not display it, as you never prompted for it! (See the fine-tuning sketch below.)
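As a sketch of how that training run might look, here is a minimal LoRA fine-tune over the ChatML file from earlier, using the Hugging Face trl + peft stack. The base model name is just an example, and exact argument names shift between trl versions, so treat this as the shape of the pipeline rather than a drop-in script:

```python
# Minimal LoRA fine-tuning sketch over the ChatML JSONL produced above.
# Argument names vary between trl/peft versions; check your installed versions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train_chatml.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2-7B-Instruct",          # example base model
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="qwen-personal-memory",
        dataset_text_field="text",           # the ChatML string we wrote earlier
        max_seq_length=2048,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=2,
    ),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
trainer.save_model("qwen-personal-memory")
```

Note how the same system prompt from the conversion step is baked into every sample: that is the "embedded prompt" you will reuse later to invoke the trained behaviour.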
So, the pipeline!
Saving your sessions and training on them is so important. If you can run your prime model, then you can train your weights! Convert to GGUF! And load your quantized model locally! So you will always use your quantized model in practice and train the weights with the new data …. hence the relationship between the weights and the GGUF is vital for a model!
So if you're creating or cloning and saving a model, you need both the GGUF and the weights! (The models run better off the quantized versions!) (The weights are heavyweight!) (In fact they are only needed for training!) …..
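For the weights-to-GGUF-to-quantized step, here is a sketch that drives llama.cpp's conversion and quantization tools. It assumes a local checkout and build of llama.cpp, and that your LoRA has already been merged back into the base weights; script and binary names have changed between llama.cpp releases, so check your copy first:

```python
# Sketch: Hugging Face weights -> full-precision GGUF -> quantized GGUF.
# Paths and tool names are assumptions; verify against your llama.cpp checkout.
import subprocess

MERGED_MODEL_DIR = "qwen-personal-memory-merged"   # LoRA merged back into base weights
F16_GGUF = "qwen-personal-memory-f16.gguf"
Q4_GGUF = "qwen-personal-memory-q4_k_m.gguf"

# 1. Convert the Hugging Face weights to a full-precision GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", MERGED_MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize so the model runs comfortably on local hardware.
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```

Keep the merged weights around for the next training round, and use the quantized GGUF for day-to-day chat.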
So how should we create our own UI?? How should we ideally communicate with our agent!?
WELL!
We need a chatroom!
So we can add personalities or agents to our room and discuss! Or perform a task together! A minimal sketch of that idea follows.
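As a taste of the chatroom idea, here is a minimal sketch: a couple of "personalities" (each just a different system prompt) take turns responding over a shared transcript, using the same local OpenAI-compatible endpoint as before. The agent names and prompts are placeholders:

```python
# Minimal chatroom sketch: several personas take turns over a shared transcript.
# Assumes the same local OpenAI-compatible server as earlier (e.g. LM Studio).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

AGENTS = {
    "Coder": "You are a pragmatic software developer. Keep answers short.",
    "Reviewer": "You are a critical code reviewer. Point out risks and gaps.",
}

transcript = [{"role": "user", "content": "How should we structure the memory model for our agent?"}]

for _ in range(2):  # a couple of rounds of discussion
    for name, persona in AGENTS.items():
        reply = client.chat.completions.create(
            model="qwen2-7b-instruct",  # example name; use your loaded model
            messages=[{"role": "system", "content": persona}] + transcript,
        ).choices[0].message.content
        print(f"{name}: {reply}\n")
        # Feed each agent's reply back into the shared transcript.
        transcript.append({"role": "user", "content": f"{name} said: {reply}"})
```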
… (next episode!)