Big data is often used as a buzzword, yet despite this common usage the term remains poorly understood. This paper explores new methods of handling big data enabled by technological advances in big data tools, cloud computing capabilities, and file storage. It focuses on newer tools such as Polars, DuckDB, and Daft to determine how mature these tools are and whether they deliver on their promises. In a modest environment consisting of a 4-core machine with 16 GB of RAM, DuckDB and Daft were able to analyze and process datasets ranging from a few million rows to over a billion rows. While Polars fell behind as the data size grew, it proved formidable for data engineering tasks such as data ingestion, wrangling, and I/O. With these tools able to perform out-of-core computations on datasets many times larger than RAM, and with cloud vendors offering single machines with thousands of cores and tens of terabytes of RAM, analyzing and processing big data has never been simpler. Compared with distributed big data tools such as Apache Hadoop and Spark, these single-node tools reduce code complexity, environment complexity, and compute costs, and lower the barrier to entry for data analytics and engineering problems.
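
As an illustration of the kind of single-node, out-of-core workload discussed above, the sketch below runs a DuckDB aggregation over a directory of Parquet files that may exceed available memory. It is a minimal example, not taken from the paper's benchmarks: the file path, column names, database name, and the 8 GB memory cap are hypothetical placeholders.

import duckdb

# Connect to a file-backed DuckDB database so intermediate results can be
# spilled to disk, and cap memory usage to mimic a constrained machine.
con = duckdb.connect("analysis.duckdb")
con.execute("SET memory_limit = '8GB'")
con.execute("SET threads = 4")

# Aggregate across a directory of Parquet files; DuckDB streams the scan,
# so the full dataset never has to fit in RAM. The path and columns are
# illustrative, not the schema used in the paper.
result = con.execute(
    """
    SELECT passenger_count, AVG(fare_amount) AS avg_fare
    FROM read_parquet('trips/*.parquet')
    GROUP BY passenger_count
    ORDER BY passenger_count
    """
).fetchall()

print(result)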