👋 Hi, I’m Gejun Zhu

Welcome to my learning journal. I learn by doing, building projects in analytics engineering, data pipelines, AI, and machine learning to truly understand how things work. Inspired by Feynman’s insight, “What I cannot create, I do not understand,” I document what I build, what I break, and what I learn along the way. Currently transitioning from analyst to analytics engineer while exploring the world of LLMs and AI agents. If you believe in learning through hands-on experimentation, you’re in the right place.

📌 Understanding Transformer Architecture by Building GPT

In Part 2, we constructed a straightforward MLP model to generate characters based on 32k popular names. In this lecture, Andrej guides us through gradually incorporating the transformer architecture to improve the performance of our bigram model. We will start by refactoring our previous model and then add pieces of the transformer architecture one at a time to see how each helps (a single self-attention head is sketched after this entry). ...

March 15, 2023 · 29 min · Gejun Zhu
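A minimal sketch of the single self-attention head at the core of that refactor, written in PyTorch; the sizes B, T, C and head_size are assumptions for illustration, not values from the post:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(42)

# Hypothetical sizes: batch, time (context length), channels (embedding dim).
B, T, C = 4, 8, 32
head_size = 16
x = torch.randn(B, T, C)

# One self-attention head: every position emits a query and a key, and
# their scaled dot products decide how much each past position contributes.
key = torch.nn.Linear(C, head_size, bias=False)
query = torch.nn.Linear(C, head_size, bias=False)
value = torch.nn.Linear(C, head_size, bias=False)

k = key(x)                                        # (B, T, head_size)
q = query(x)                                      # (B, T, head_size)
wei = q @ k.transpose(-2, -1) * head_size**-0.5   # affinities, (B, T, T)

# Causal mask: a character may only attend to itself and earlier positions.
tril = torch.tril(torch.ones(T, T))
wei = wei.masked_fill(tril == 0, float("-inf"))
wei = F.softmax(wei, dim=-1)

out = wei @ value(x)                              # (B, T, head_size)
print(out.shape)                                  # torch.Size([4, 8, 16])
```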

Building My First dbt Project with DuckDB

Why I’m Learning dbt as an Analyst You can find the project on GitHub here: zhugejun/learn-dbt-by-building I’ve been an Institutional Research Analyst in higher education for almost a decade. In my day-to-day job, I wrangle enrollment data with SQL, automate reports with R, build prediction models with Python, and visualize data with Power BI. There are times when I need to run queries to pull data directly from CAMS, download data from ZogoTech (our third-party OLAP vendor), save it as CSV, and load it into R for aggregation, visualization, and further analysis. Sometimes, I need to ingest enrollment history data from National Student Clearinghouse (NSC) and combine it with data from multiple sources to create a superintendent report. ...

January 11, 2026 · 9 min · Gejun Zhu

Multilayer Perceptron (MLP)

In Part 1, we learned how to build a neural network with one hidden layer to generate words, and the model performed fairly well, matching the words generated by the counting-based approach. However, the bigram model suffers from the limitation that it assumes each character depends only on the character immediately before it. Suppose there is only one bigram starting with a particular character. In that case, the model will always generate the following character in that bigram, regardless of the context or the probability of other characters (a small sketch of this follows the entry below). This lack of context limits how well a bigram model can perform. In this lecture, Andrej shows us how to build a multilayer neural network to improve the model’s performance. ...

March 13, 2023 · 13 min · Gejun Zhu
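A minimal sketch of the limitation described above, using a hypothetical three-word corpus (not the names dataset from the lecture): because ‘q’ is only ever followed by ‘u’ in this data, a count-based bigram model puts all of its probability mass on ‘u’ and can never generate anything else after ‘q’.

```python
import torch

# Hypothetical toy corpus: 'q' is always followed by 'u'.
words = ["quin", "aqua", "quil"]

chars = sorted(set("".join(words)))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

# Count how often each character follows each other character.
N = torch.zeros((len(chars), len(chars)), dtype=torch.int32)
for w in words:
    for c1, c2 in zip(w, w[1:]):
        N[stoi[c1], stoi[c2]] += 1

# Normalize the row for 'q' into a probability distribution:
# all the mass lands on 'u', so sampling after 'q' is deterministic.
p = N[stoi["q"]].float()
p = p / p.sum()
print({itos[i]: p[i].item() for i in range(len(chars)) if p[i] > 0})
# -> {'u': 1.0}
```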

Bigram Character-level Language Model

This is a series of learning notes for the excellent online course Neural Networks: Zero to Hero created by Andrej Karpathy. The official Jupyter Notebook for this lecture is here. In this lecture, Andrej shows us two different approaches to generating characters. The first approach involves sampling characters based on a probability distribution, while the second uses a neural network built from scratch. Before we can generate characters using either approach, let’s prepare the data first. ...

March 4, 2023 · 13 min · Gejun Zhu