Resume - Paul Symons
The original source of this document is https://www.tind.au/resume/
Value Proposition
After 25 years in hands-on solution delivery, I know how to architect and build systems that create valuable information. Further, I know how to communicate effectively with others to encourage empathy, foster collaboration and harness SME knowledge to deliver meaningful results.
I engender trust by actively listening, digging deep on problems and prescribing outcomes instead of solutions. Making space for others to grow and develop whilst always remaining open to growth myself, I absorb the experience of others and respect the toil of what came before, to help define what will deliver future success.
I am an Australian Citizen and live near Perth, Western Australia.
Learn more about me on the TIND website
Education
Recent Career Highlights
- Designing and collaboratively building an Iceberg-format data lake and serverless ingestion pipeline at First Mode. Tested at a peak of 120M events per minute, with ingestion latency of ~165 seconds
- Creating, cultivating and leading a Data Practice at Mechanical Rock that contributed significantly to annual revenue
- Leading and coaching colleagues and customers in Data Architecture, Engineering, Platform Design and Data Transformation
- Launched 4 Customer Journeys on Snowflake with TIND and Mechanical Rock
- Presented at several Snowflake and Perth Data Engineering Community events
Competencies
A sample of the tools, technologies and techniques I've gained experience with during my career:
Cloud
- AWS & Azure
- Automation
- Build Pipelines
- Containers
- Data & Analytics
- Key Management
- Linux
- NoSQL & Relational Databases
- Reliability Engineering
- IAM & Security
- Serverless
- Storage
- Streaming
Data
- Airflow
- CDC
- Databricks
- dbt
- dbt Cloud
- De-duplication
- ELT+ETL
- Fivetran
- AWS Glue
- Kimball Design
- Python
- Amazon Redshift & Athena
- Snowflake
- SQL
- Spark: PySpark, Spark UI
Software
- API Development
- Build Systems
- Clean Architecture
- Java / Kotlin
- Microservice Design
- Packaging & Artifacting Dependencies
- React + CSS
- Refactor & Re-platform
- Software Frameworks
- TDD+BDD
- JavaScript, TypeScript, Python, Java, Kotlin, HTML5
Previous Roles
Principal Consultant
2024 - present
Pioneer Credit (13 weeks - subcontract) - Kickstarting the customer's migration from on-premises to cloud
Key Focus Areas:
- Understanding the customer's tooling preferences, engineer experience, and aspirational outcomes
- Designing, documenting and collaboratively implementing Data Integration and Transformation workflows with Fivetran, Azure Data Factory and dbt core
- Analysing the customer's governance requirements; designing and implementing security controls so that RBAC flows through from Azure AD, combined with tag-based column masking and row-level security to safeguard sensitive data (see the sketch after this list)
- Continuous collaboration and documentation to embed the solutions in the customer's organisational knowledge
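A minimal sketch of that governance pattern, using the Snowflake Python connector. All names here (the pii_mask policy, pii tag, branch_rls policy, customers table and the roles) are hypothetical illustrations, not the client's actual objects; tag-based masking is a Snowflake Enterprise Edition feature.

```python
import snowflake.connector

# Hypothetical object and role names, for illustration only.
GOVERNANCE_DDL = [
    # Mask tagged columns unless the session holds a role with PII access.
    """CREATE MASKING POLICY IF NOT EXISTS pii_mask
         AS (val STRING) RETURNS STRING ->
         CASE WHEN IS_ROLE_IN_SESSION('PII_READER') THEN val ELSE '****' END""",
    # Attach the policy to a tag: tagging a column is then enough to mask it.
    "ALTER TAG pii SET MASKING POLICY pii_mask",
    # Row-level security: rows are visible only for the caller's branch,
    # unless the session holds a role that may see all branches.
    """CREATE ROW ACCESS POLICY IF NOT EXISTS branch_rls
         AS (branch STRING) RETURNS BOOLEAN ->
         IS_ROLE_IN_SESSION('ALL_BRANCHES') OR branch = CURRENT_ROLE()""",
    "ALTER TABLE customers ADD ROW ACCESS POLICY branch_rls ON (branch)",
]

conn = snowflake.connector.connect(account="...", user="...", password="...")
for statement in GOVERNANCE_DDL:
    conn.cursor().execute(statement)
```

In a setup like this, Azure AD group membership can be provisioned to Snowflake roles via SCIM, so IS_ROLE_IN_SESSION reflects the directory and both policies follow the RBAC hierarchy automatically.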
Data Vanguards (2 weeks - subcontract @ Resources client) - Solution Architecture and Analysis
Key Focus Areas:
- Evaluation of various Data Catalog Products for fit to customer requirements
- Solution architecture for self-service transformation: re-platforming an existing Redshift cluster to Redshift Serverless with dbt Cloud
Rent.com.au - Data Platform Maturity Assessment
Key Focus Areas:
- Interviews and solution deep-dives to understand the current state and pain points
- Production of assessment report with executive summary, technical assessment, and non-functional appraisal
Principal Data Architect
2022 - 2024
Led and mentored a team of two other DevOps / data engineers to deliver a cloud data platform for a large volume of telemetry data: ingested from S3, loaded to Iceberg tables, transformed with dbt and visualised with Grafana
- Migrated from a single-instance TimescaleDB to an S3 and Iceberg data lake using Athena and dbt
- Created a solution to translate 2000+ Grafana dashboards (and SQL) from TimescaleDB to Athena
- Used an off-the-shelf data loader to format, compress and ship PLC data to the data lake
- Created a solution to generate dbt packages of models that transform 6,000+ telemetry signals into their respective domain tables (a generator sketch follows this list)
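A minimal sketch of the model-generation idea, assuming a hypothetical YAML manifest with name, domain and column fields per signal; the real solution generated complete dbt packages rather than bare model files.

```python
from pathlib import Path

import yaml  # PyYAML; the manifest format here is a hypothetical example

# Each generated model filters the raw signal stream down to one signal.
MODEL_TEMPLATE = """\
select
    observed_at,
    value as {column}
from {{{{ source('telemetry', 'raw_signals') }}}}
where signal_name = '{signal}'
"""

def generate_models(manifest_path: str, out_dir: str) -> None:
    """Write one dbt model per telemetry signal, grouped into domain folders."""
    signals = yaml.safe_load(Path(manifest_path).read_text())
    for signal in signals:
        model_dir = Path(out_dir) / signal["domain"]
        model_dir.mkdir(parents=True, exist_ok=True)
        sql = MODEL_TEMPLATE.format(column=signal["column"], signal=signal["name"])
        (model_dir / f"{signal['name']}.sql").write_text(sql)
```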
Our key innovation was building a serverless Iceberg ingestion pipeline that used Athena to load data with low latency, similar to the Tabular product that was acquired by Databricks in 2024.
The motivation for us was navigating around a few key constraints (a sketch of the load step follows this list):
- The expense of Kinesis Data Firehose for landing data
- The cost and latency of running regular Glue jobs, or the cost of Kafka brokers / Kinesis streams for use with Spark Streaming
- Avoiding over-reliance on AWS products, as we expected to support multiple cloud platforms (Athena can be replaced with Trino)
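A minimal sketch of the core load step under those constraints, using boto3's Athena client. The table, database and workgroup names are hypothetical, and the real pipeline wrapped this in batching, deduplication and failure handling.

```python
import boto3

athena = boto3.client("athena")

def load_batch(staging_table: str, target_table: str) -> str:
    """Kick off an Athena INSERT that commits a new Iceberg snapshot.

    Athena writes the Iceberg data files and updates the table metadata
    itself, so there is no Spark cluster or Kafka broker to operate.
    """
    response = athena.start_query_execution(
        QueryString=f"""
            INSERT INTO {target_table}
            SELECT device_id, signal_name, value, observed_at
            FROM {staging_table}
        """,
        QueryExecutionContext={"Database": "telemetry"},
        WorkGroup="ingestion",
    )
    return response["QueryExecutionId"]
```

Because the load is plain SQL against Iceberg tables, swapping Athena for Trino keeps the same pipeline shape on another platform.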
Principal Consultant
2018 - 2022
Completed 15 distinct engagements for 7 clients over 4 years, typically leading teams of 2-4 people.
Highlights include:
- Planned and implemented the integration and roll-out of dbt Cloud at a 'Big Australian' resources client
- Workshops and demonstrations with several clients on the benefits and experience of working with dbt (core)
- Design and implementation of a serverless data integration between on-premises Oracle and a Snowflake data warehouse, including schema auto-detection and migration (sketched after this list), three years before Snowflake released its INFER_SCHEMA feature
- Assisting the cloud platform team at a security-conscious insurance company to onboard AWS Glue in a constrained environment
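The schema auto-detection can be sketched as a type-mapping step: read column metadata from Oracle's ALL_TAB_COLUMNS view, translate the types, and emit Snowflake DDL. The mapping subset and table below are illustrative, not the engagement's actual code.

```python
# Illustrative subset of an Oracle-to-Snowflake type map.
ORACLE_TO_SNOWFLAKE = {
    "VARCHAR2": "VARCHAR",
    "NUMBER": "NUMBER",
    "DATE": "TIMESTAMP_NTZ",
    "CLOB": "VARCHAR",
}

def build_ddl(table: str, columns: list[tuple[str, str]]) -> str:
    """Translate Oracle (column, type) metadata into a Snowflake CREATE TABLE."""
    cols = ",\n  ".join(
        f"{name} {ORACLE_TO_SNOWFLAKE.get(dtype, 'VARCHAR')}"
        for name, dtype in columns
    )
    return f"CREATE TABLE IF NOT EXISTS {table} (\n  {cols}\n)"

# Metadata would come from ALL_TAB_COLUMNS; hard-coded here for illustration.
print(build_ddl("landing.customers", [("ID", "NUMBER"), ("NAME", "VARCHAR2")]))
```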
Tech Lead
2017 - 2018
- Led a small team to deliver a simplified home-loan credit decisioning solution using a mixture of vendor (Experian) and Java products
- Automated the packaging and delivery of vendor artifacts into consumable JARs to simplify integration
- Introduced bespoke JUnit integration testing of the Experian decisioning software platform, run as regression tests in the build pipeline
Java Developer
2011 - 2017
Developed backend Java services and frontend HTML and JavaScript experiences for Business Banking at Bankwest
Certifications
Reference Material
Case Studies
- Pioneer Credit - Snowflake Data Platform
- Bamboo - simple data platform and transformation with dbt
- AFG - Serverless Data Platform
Technical Blogs
→ TIND
Side Projects
- Evidence BI demo — Fuelwatch WA historic data
This demonstration site was created for presentation in a Perth Data Engineering Meetup, where the merits of DuckDB and Evidence.dev were advocated.
The presentation itself is also available.
- Evidence BI demo — AEMO Wholesale Electricity Market data
This demonstration site presents a small subset of available AEMO WEM data. The website is a static site created using Evidence.dev; scheduled jobs pull and transform raw AEMO data as it lands and convert it to partitioned parquet files on S3, using DuckDB.
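A minimal sketch of one of those scheduled jobs: DuckDB reads the raw CSV drop and writes hive-partitioned parquet to S3. The bucket, paths and the trading_interval column are illustrative, and S3 credentials are assumed to come from the environment or a DuckDB secret.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")  # enables s3:// paths

# Repartition the raw CSV drop into monthly parquet files on S3.
con.execute("""
    COPY (
        SELECT *, strftime(trading_interval, '%Y-%m') AS month
        FROM read_csv_auto('raw/wem/*.csv')
    )
    TO 's3://example-bucket/wem'
    (FORMAT parquet, PARTITION_BY (month), OVERWRITE_OR_IGNORE)
""")
```

Partitioning by month keeps each scheduled run incremental: the static Evidence.dev site then queries only the parquet partitions it needs.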