Posts

Showing posts from August, 2024

CENTER

NEW one ain't rich ain't a bitch is kinder acts like a reminder a reminder of hope ain't a victim

Self-Supervised Learning for Dumber Kids...

Reinforcement Learning for Dumb Kids (agent, environment, reward, punishment, action)

Hello, today's article will cover the basics of reinforcement learning. We will stick to the basics because the advanced concepts are tough, and I can't explain them myself. In plain terms, RL is just real life: we try to survive in our surroundings, which are either known or unknown to us, but thanks to our hardwired instincts we manage not only to survive but to dominate planet Earth, as we have for the last 10,000 years. Now, what is reinforcement learning? Let me introduce some terms: agent, reward, punishment, environment. "The agent seeks reward in the environment and avoids punishment." So you get the idea: it is based on reward and punishment. But how is it different from ordinary machine learning? Well, we aren't working on a fixed dataset; we are interacting with the environment. Some basic facts: RL is based on environments, so many parameters come into play and the variables are practically infinite, the scenarios are real-world, and the scope is broader. The objective is to rea
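To make that vocabulary concrete, here is a minimal sketch (not from the article) of an agent interacting with a made-up one-dimensional environment. The environment itself, the +10 reward for reaching the goal, and the -1 punishment per wasted step are all toy values chosen purely for illustration.

```python
import random

# A toy "environment": the agent stands on a number line and wants to reach position 5.
class ToyEnvironment:
    def __init__(self):
        self.position = 0

    def step(self, action):              # action is -1 (step left) or +1 (step right)
        self.position += action
        if self.position == 5:
            return +10, True             # reward: the goal was reached, episode ends
        return -1, False                 # punishment: every extra step costs a little

# "The agent seeks reward in the environment and avoids punishment":
# the agent keeps a running value estimate for each action and mostly picks the better one.
values = {-1: 0.0, +1: 0.0}
for episode in range(200):
    env = ToyEnvironment()
    done = False
    while not done:
        if random.random() < 0.2:                    # sometimes explore at random
            action = random.choice([-1, +1])
        else:                                        # otherwise exploit what has worked so far
            action = max(values, key=values.get)
        reward, done = env.step(action)
        values[action] += 0.1 * (reward - values[action])   # nudge the estimate toward the reward

print(values)   # "step right" ends up with the higher estimated value
```

The learning rule here is just a running average; the point is only to show the agent, environment, action, reward, punishment loop in action.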

EfficientNet and ImageNet

(Image taken from the original research paper.) Hello, I am back with another article related to CNNs. This one deals with a new method of improving the efficiency of CNNs called the "compound scaling method" (CSM). In older models, scaling was done ad hoc, without a specific pattern; using CSM, the model achieves better accuracy. But what is compound scaling? If we want to spend 2^N times more computational resources, we can increase the network depth by alpha^N, the width by beta^N, and the image resolution by gamma^N, where alpha, beta, and gamma are constant coefficients determined by a small grid search on the original small model. Result: compound scaling gives about 2.5% better accuracy than scaling a single dimension alone. Alongside that, I would like to tell you about the ImageNet competition, which was held from 2011 to 2017; ConvNet, GoogLeNet, and SENet were a few winners of the competition. The task in those competition
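For intuition, here is a tiny sketch of the compound scaling rule described above. The coefficients alpha=1.2, beta=1.1, gamma=1.15 are the ones reported for the EfficientNet-B0 baseline in the paper; the base depth, width, and resolution below are placeholder numbers chosen for illustration, not values from the post.

```python
# Compound scaling: with 2**phi times more compute, scale depth by alpha**phi,
# width by beta**phi, and input resolution by gamma**phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15   # found by grid search on the small base model

def compound_scale(base_depth, base_width, base_resolution, phi):
    depth = round(base_depth * ALPHA ** phi)
    width = round(base_width * BETA ** phi)
    resolution = round(base_resolution * GAMMA ** phi)
    return depth, width, resolution

# Total FLOPS grow by roughly 2**phi because alpha * beta**2 * gamma**2 is close to 2.
for phi in range(4):
    print(phi, compound_scale(base_depth=18, base_width=64, base_resolution=224, phi=phi))
```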

A Very Long Essay on Time Series Forecasting


Playing with Reinforcement Learning

This article covers topics like reinforcement learning, OpenAI Gym, the RL process, World Models, and the MDN-RNN. Let me explain the story of RL with an example: say you are thrown into a jungle. How would you survive? You need to find food, you need to fight off carnivorous animals, and you need to reach a final destination where there are other humans!!! This example maps directly onto how RL works. Here is the definition: the jungle is the "environment", you are the "agent", the place and situation you are in is your "game state", the food you eat is the "reward", a day in the jungle is an "episode", and the time is simply the timestep. And this is the process: the agent takes an action in the environment, which changes the state and returns a reward (+/-). Now this sounds vague, right? So how do you get hands-on experience with it? For that, visit https://www.gymlibrary.dev/index.html (see the sketch below). Now how will our agent behave in this new environment? Who w
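As a rough sketch (not code from the post), this is roughly what a first session with Gym looks like, using the CartPole task as a stand-in environment. Note that the reset/step signatures differ between older and newer Gym versions; this follows the newer API documented at the link above.

```python
import gym   # pip install gym

env = gym.make("CartPole-v1")          # the "environment" (our jungle)
obs, info = env.reset()                # the initial "game state"

total_reward = 0.0
done = False
while not done:                        # one "episode"
    action = env.action_space.sample() # a random "agent": just pick any legal action
    obs, reward, terminated, truncated, info = env.step(action)  # act, observe new state and reward
    total_reward += reward
    done = terminated or truncated

print("reward collected this episode:", total_reward)
env.close()
```

A random policy is of course a terrible agent; the point is just to watch the state, action, reward loop run end to end.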

Advanced Vision Problems

Let's attack the problems directly: object measurement, counting, pose estimation, image search

Bach and Mozart with RNNs


OPENAI GYM


BUILDING YOUR TECHNICAL PORTFOLIO


MLOps: making AI stable

HELLOOOO again. "Making AI stable" sounds confusing, I know, so let me give you the back-story behind that phrase. Say you built a movie recommendation system that recommends movies according to your taste. That is a very narrow, static setup, which rarely holds up in real life; real life is chaotic, dynamic, and unstable, with lots of variables and scaling happening around you. So MLOps enters the picture. MLOps means machine learning operations, and it deals with making AI practical in the real world! But what is actually done in MLOps? Now, a boring definition of an ML engineer by the dictator/Google: "someone who designs, builds and productionizes ML models to solve business challenges." Design, implement, and deploy: the 3 idiots of MLOps. Tools and processes used in MLE

LLMs: the giants behind AGI

Keywords: emergence, Transformers, "Attention Is All You Need", encoder, decoder, generative AI, classification task, parameters and tokens, prompt, large language models, generative pretrained transformers, BERT (Bidirectional Encoder Representations from Transformers). AI, the rizz of the 21st century: is it a big thing now? Is AI going to take over the world? How will AI replace us? Will it replace us at all? Everyone who is tech-aware knows the fear and dystopian emotions AI has instilled in us over the last 4 years. I would say AI might take over and we'll end up like slaves; karma comes back, doesn't it... Well, to understand how AI could take over humanity, we first need to understand how it works and how it replaces us. Getting into the details, we reach two major AI tools: ChatGPT and DALL-E. One lies on the textual spectrum while the other lies on the visual spectrum. This article deals with the TEXTUAL side of AI... Let me be direct: the tech behind ChatGPT
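As a small taste of the tech behind those keywords, here is a minimal sketch of scaled dot-product attention, the core operation from "Attention Is All You Need". The shapes and numbers are toy values chosen for illustration, not anything from the post.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -- the heart of the transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                                        # weighted mix of the value vectors

# Toy example: a "sentence" of 4 tokens, each embedded into 8 dimensions.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)   # self-attention: Q = K = V
print(output.shape)   # (4, 8): one updated representation per token
```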

Generative modelling: The mother of dystopia


SeqToSeq

There has been huge advancement in the field of NLP over the last 10-15 years, and much of it has been driven by models such as sequence-to-sequence (seq2seq) models, which have major applications in language translation and paragraph summarization. Seq2seq models use search algorithms such as greedy search and beam search to generate their output. Architecture of a seq2seq model: it has two parts, an encoder and a decoder. How does it work? The encoder compresses the input into a representation, and the decoder then converts that representation into the desired output (see the sketch below). But where do seq2seq models sit within NLP? They fall under the field of neural machine translation. Machine translation has two broad families of models: 1. Neural Machine Translation 2. Statistical Machine Translation.
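As a rough sketch of that encoder-decoder idea (not code from the post), here is a minimal seq2seq skeleton in PyTorch with a greedy-search decoder. The vocabulary size, hidden size, and special token ids are placeholder choices for illustration.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len) of token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                        # fixed-size representation of the whole input

class Decoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tok, hidden):          # tok: (batch, 1) previous output token
        output, hidden = self.rnn(self.embed(tok), hidden)
        return self.out(output), hidden      # logits over the target vocabulary

def greedy_decode(encoder, decoder, src, sos_id=1, eos_id=2, max_len=20):
    """Greedy search: at every step keep only the single most probable next token."""
    hidden = encoder(src)
    tok = torch.full((src.size(0), 1), sos_id, dtype=torch.long)
    result = []
    for _ in range(max_len):
        logits, hidden = decoder(tok, hidden)
        tok = logits.argmax(dim=-1)          # (batch, 1) best next token
        result.append(tok)
        if (tok == eos_id).all():
            break
    return torch.cat(result, dim=1)

enc, dec = Encoder(vocab_size=100, hidden_size=32), Decoder(vocab_size=100, hidden_size=32)
src = torch.randint(0, 100, (1, 6))          # one toy "sentence" of 6 token ids
print(greedy_decode(enc, dec, src))          # an (untrained, so meaningless) output sequence
```

Beam search would instead keep the k best partial outputs at each step rather than only the single best one, which usually yields better translations at a modest extra cost.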

WORLD MODELS!

World models are generative models of popular reinforcement learning environments. How are they used? They are first trained in an unsupervised manner to learn representations of the training environment, and those representations are then fed to an agent; the world model itself stays fixed while a small controller policy is trained on top of it. The agent can even learn inside the hallucinated ("dream") environment generated by the world model and then transfer that learning to the real environment. For a demonstration, visit https://worldmodels.github.io. Reference: https://arxiv.org/abs/1803.10122, the research paper by David Ha.