Algorithms and data structures sound intimidating, especially since they are one of the most important parts of the interview process. I was not very fond of them either, until I tried different study methods and found one that works well.
In this blog post, I want to talk about BERT and its architecture from a practical perspective. To make it fun to read, I will show some examples using multilingual BERT.
In this blog post, I am excited to share my recent web app: VocabAssist. The app eases the pain for ESL students like me of looking up every unfamiliar English word in an article. Once you provide an English text and your level in the CEFR framework, difficult vocabulary is automatically highlighted along with its meaning.
Recently, I met a friend who was actively interviewing for software engineer roles. I asked him: "Was the coding interview hard?" He laughed: "The first company asked me a DFS question, and then the second company also asked about DFS, so by then I had already gained experience." He was somewhat lucky, but it made me wonder: why would companies favor such a simple search algorithm over something more complicated, say, dynamic programming?
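For readers who have not seen it in a while, the "simple search algorithm" in question really is just a few lines. Here is a generic textbook sketch of iterative depth-first search over an adjacency-list graph (not tied to any particular interview question):

```python
# Minimal iterative depth-first search on an adjacency-list graph.

def dfs(graph, start):
    """Return nodes in the order DFS first visits them."""
    visited, stack = [], [start]
    seen = {start}
    while stack:
        node = stack.pop()
        visited.append(node)
        # Push neighbors in reverse so they are popped in listed order.
        for nxt in reversed(graph.get(node, [])):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return visited

g = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(g, "A"))  # → ['A', 'B', 'D', 'C']
```

Perhaps that brevity is exactly the appeal: it tests whether a candidate can model a problem as a graph, without the heavier machinery of dynamic programming.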
For the past few weeks, I tried to build a board game recommender as a side project; however, I failed to beat the baseline model. Due to time constraints, I had to abandon the project and move on to other things. Nevertheless, I feel that my failure may help those who are new to recommendation systems, and that is the purpose of this blog post. I will explain some techniques I (mis)used and the problems that might have caused the disappointing performance.
Recently, I worked on a side project for sentiment classification of Chinese text. The program can classify any given sentence as either positive or negative.
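As a rough illustration of what such a classifier's interface looks like (the function and word lists here are hypothetical stand-ins, not the project's actual code, which would use a trained model), binary sentiment classification boils down to mapping a sentence to one of two labels:

```python
# Hypothetical sketch of a binary sentiment classifier interface.
# A toy keyword lexicon stands in for a trained model so the
# example stays self-contained.

POSITIVE_WORDS = {"好", "喜欢", "棒"}   # "good", "like", "great"
NEGATIVE_WORDS = {"差", "讨厌", "糟"}   # "bad", "hate", "awful"

def classify(sentence: str) -> str:
    """Return 'positive' or 'negative' for a Chinese sentence."""
    pos = sum(w in sentence for w in POSITIVE_WORDS)
    neg = sum(w in sentence for w in NEGATIVE_WORDS)
    return "positive" if pos >= neg else "negative"

print(classify("我很喜欢这部电影"))  # → positive ("I really like this movie")
print(classify("这家餐厅太差了"))    # → negative ("This restaurant is terrible")
```

A real system replaces the keyword lookup with a learned model, but the input/output contract stays the same.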
My motivation for writing this very first blog post on the last day of 2018 came from a casual conversation between me and my boyfriend. He said he would look back on what he had achieved over the last 12 months and feel proud, which reminded me of a Chinese meme