Let me introduce myself: my name is Nick Brett. Based in New York, I'm a builder, a product manager and an expert in leading teams to harness the data that drives markets.
Over nearly 20 years, I've worked in most of the key functions of fintech data, from managing global data partner relationships at Bloomberg LP to building the data systems that deliver massive-scale analytics at Meta. I've managed worldwide organizations, with teams from Tokyo to New York, Dubai to Sydney, and I've taken high-impact individual contributor roles to be the hands-on leader for new initiatives. For most of my career I've worked as an engineer, but for three years I ran a Product Management team, and I have a passion for marrying the skills needed to build the right solutions with those needed to solve the right problems.
I've learned a lot over that time about the systems and products, processes and people necessary to build a data factory that can power buy-side and sell-side businesses: from capturing metadata to prevent look-ahead bias in alpha modeling, to tracing data lineage across heterogeneous systems for data governance, to operating a global data onboarding operation at scale. I believe many data problems sit at the intersection of technology, product and people, and benefit from the multi-disciplinary approach that I bring.
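To make the look-ahead bias point concrete: the key is to record when each data point became known (its "knowledge time"), not just when the underlying event occurred, and to filter on that timestamp when assembling training data. Here is a minimal sketch in Python; the DataFrame shape and column names are hypothetical, purely for illustration:

```python
import pandas as pd

def as_of(features: pd.DataFrame, simulation_time: pd.Timestamp) -> pd.DataFrame:
    """Return only the rows a model could legitimately have seen at simulation_time.

    Filtering on knowledge_time (when the value arrived) rather than
    event_time (when the value occurred) is what prevents look-ahead bias:
    a revised figure published later must not leak into an earlier
    backtest window.
    """
    return features[features["knowledge_time"] <= simulation_time]

# A figure for 2024-01-31 whose revision only became known on 2024-02-15
features = pd.DataFrame({
    "event_time": pd.to_datetime(["2024-01-31", "2024-01-31"]),
    "knowledge_time": pd.to_datetime(["2024-02-01", "2024-02-15"]),
    "value": [1.0, 1.1],  # initial print, later revision
})
print(as_of(features, pd.Timestamp("2024-02-05")))  # sees only the initial print
```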
Currently I'm working at Two Sigma in the Data Engineering department, with a focus on data governance, data lineage and data cataloging solutions for the more than 144 petabytes of data that the firm stores.
Here you'll find details on personal projects, including code, a resume and links to my articles. Do reach out if you'd like to talk.
For a 'data person', building a website using modern front-end technologies and patterns was a challenge. Here I share my approach and the lessons learned.
A machine learning system that uses United States flight data to predict whether upcoming flights are likely to be delayed.
Insights and lessons learned from my adventures in Product Management.
Recommendations on the tools I use as an Engineer, Product Manager and Data Person.
Here you'll find all of my code. Feel free to embrace and extend.
A collection of articles focusing on things I've learned as a Software Engineering Manager.
Where I've worked, what I've built, and the impact I've had.
I'm currently focused on various strategic initiatives across the Two Sigma Data Engineering department.
This includes building systems that provide full reproducibility of data over time, designing new data storage architectures to unlock research opportunities, and driving complex external partnerships.
This work combines deep technical data expertise, strong product sense and the ability to collaborate widely across the organization.
Expanded scope to lead a second cross-functional team, "Differentiated Data Engineering", comprising Data Engineers, Data Scientists and Program Managers.
The team built and leveraged LLM technology to create differentiated proprietary datasets. Responsibilities encompassed three efforts: building LLM-based tools that can deliver new sources of alpha; establishing systems and processes for the scalable validation and tuning of LLMs with human input; and using the team's tools to build datasets that themselves deliver alpha.
Led a team of Data Engineers responsible for Two Sigma's Data Platform, a suite of tools and services that enables Alpha Modelers and other internal customers to discover, share and seamlessly use Two Sigma's vast catalog of vendor-supplied, proprietary and derived datasets. Key products include a 'Feature Catalog' that organizes, indexes and permissions economically valuable time-series data (features), and a data lineage system that traces all data flows across Two Sigma.
Key achievements included establishing a process and mechanism for data lineage insights that delivered use cases in risk management, support cost reduction and model attribution, without disrupting modeler workflows or requiring a rebuild of the existing architecture, and developing systems that allow data upgrades to occur seamlessly, migrating alpha models to new datasets with no disruption and minimal effort.
Velma was a VC-backed early-stage company using AI to make the process of planning and running large software projects more transparent, efficient and agile. Advised the CEO and executive team on company strategy, fundraising and market positioning, and served as a general sounding board for hard problems.
Responsible for multiple teams building the technology platform that processes and stores the data generated by Messenger’s billions of users, delivering actionable data to dozens of Meta’s teams for insight into customer usage and market trends. A key focus was ensuring that data was handled in a fundamentally private way, by building a closed-form data lineage system that allowed the company to make strong guarantees about data usage, comply with regulations around the world (e.g. the EU Privacy Directive) and build greater user trust.
Participated in bi-weekly one-hour meetings of 'councils' of like-minded executives. Provided and received coaching on a range of challenges, from scaling a business to dealing with role changes and identifying opportunities for professional growth.
Led a global team of 80+ product managers, data engineers and customer success managers that grew the marketplace for Bloomberg’s 4,000+ data partners. Improved data usage and reduced Bloomberg customer churn by enabling partners to spend less time enabling their customers for data (from weeks to <5 days per request), while also speeding historical data updates from multiple days to <24 hours.
With a team of 6 product managers and 2 data engineers, grew the business by reducing the time to onboard new contributed content from months to weeks, removing interruptions in data service caused by customer firm mergers, and cutting data processing errors from 3% of content to <0.1%. Launched a new Alternative Data business by reducing vendor onboarding time from weeks to days, enabled by a new data pipeline system built on a Python microservices, NoSQL architecture.
Worked as a senior IC on an exciting new initiative to reduce the time to onboard and normalize new market data feeds, a process that historically relied on C++ programmers and bespoke transformation logic. Personally designed and built a functional prototype for a suite of Node.js / React tools that allowed data analysts with only limited Python expertise to define market data transformation rules in a WYSIWYG environment. These rules were used both to generate market data handling code directly and, for modules that could not be cost-effectively generated, to drive test plans and high-quality specification documents that let C++ engineers develop them more efficiently (a simplified sketch of this rules-as-data idea follows below).
This prototype led to a seven-figure funding plan for a team of engineers to build out a fully fledged product offering that, as of Oct 2020, had been deployed across ~6 exchanges, reducing both time to market and maintenance costs for their respective market data feeds.
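The production tools were Node.js / React (generating C++ where needed), but the underlying rules-as-data pattern is easy to sketch. The hypothetical Python below, with invented field names and offsets, shows the core idea: a single declarative rule set drives both the runtime parser and the specification document handed to engineers, so the two can never drift apart.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FieldRule:
    """One declaratively defined field of a fixed-width market data message."""
    name: str
    start: int   # byte offset in the raw message
    length: int
    convert: Callable[[bytes], object]

# Hypothetical rule set an analyst might define in the WYSIWYG tool
RULES = [
    FieldRule("symbol", 0, 6, lambda b: b.decode("ascii").strip()),
    FieldRule("price", 6, 8, lambda b: int(b) / 10_000),  # fixed-point price
    FieldRule("size", 14, 6, lambda b: int(b)),
]

def parse(message: bytes) -> dict:
    """Runtime parser: the rules, not hand-written code, define the feed."""
    return {r.name: r.convert(message[r.start : r.start + r.length]) for r in RULES}

def spec_document() -> str:
    """The same rules double as a human-readable specification."""
    return "\n".join(f"{r.name}: bytes {r.start}-{r.start + r.length - 1}" for r in RULES)

print(parse(b"IBM   00123450000100"))  # {'symbol': 'IBM', 'price': 12.345, 'size': 100}
print(spec_document())
```

Because analysts edit only the rule table, the generated code, tests and documentation stay in sync automatically.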
Engineering manager for 5 teams (30 reports) in London and New York building a new product (DASH) for equity salespeople. Developed in C++ and JavaScript, the software uses a matching algorithm to combine data and surface opportunities that save users time; it reached 3,000+ daily active users within 18 months. Secured funding for 10 developers after pitching to Bloomberg LP's CEO a new business intelligence platform for sales managers, built around a personally coded C++ prototype.
Managed 4 varied teams (22 reports) across 3 countries that built growing products, from batch-based trade analysis reporting software (BTCA) that achieved >3 million USD in revenue growth, to real-time VWAP price calculation engines in C++ that processed thousands of market events per second.
Grew usage from 3,000 to 40,000 active users over 2 years by leading a team of 8 developers building a suite of applications for real-time market analysis.
Developed a market data alert system in C/C++ for a distributed UNIX environment, delivering 3,000 to 4,000 alerts a second to users with 24/7 availability.
Library House was a buy-side data research company focused on providing transparency on private, high-growth-potential companies across Europe. Gathered data by interviewing an average of 10 CEOs a week and analyzing company filings, and prepared short reports for clients.