Remote Back-end Node.js Developer at TrendSpider
TrendSpider is a professional tool for those who trade anything from stocks to cryptos. We provide smart retail traders, financial advisors, and hedge funds with a market research and algo trading platform. We need a Node.js engineer who will primarily deal with data feeds (market data and alternative data).
- Connecting new data types and maintaining existing pipelines
- Improving observability in data import pipelines
- Working on the business logic needed to make data usable in charting, scanning, backtesting, custom JS scripting, etc.
Here are the skills required to do the job well:
- Experience establishing cloud infrastructure via Terraform. AWS Lambda + SQS is a bare minimum.
- Strong expertise in Node.js. At the very least, you need experience profiling a Node.js service and debugging a memory leak and an event loop jam. The reason: some services handle tens of thousands of messages per second, and some pipelines process gigabytes of data with limited RAM.
- Experience working with PostgreSQL and MongoDB without ORMs.
- Experience dealing with massive data (both “read a lot” and “write a lot”) in PostgreSQL.
- Strong understanding of both OOP and functional programming concepts. We use both, depending on the component and the goal.
- Ability to test your work well: both writing automated tests (unit, functional, e2e) and running manual checks by doing what customers do.
- Ability to figure things out. Every data vendor is different, every API is different, and every kind of data has its own scale.
- Ability to build simple solutions to complex problems. Simplicity is king here.
- Hands-on experience trading or designing strategies will be a significant bonus.
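To make the event-loop-jam requirement above concrete, here is a minimal lag-monitor sketch of the kind of diagnostic you might write (illustrative only, not TrendSpider's actual tooling; the function name and threshold are assumptions). It schedules a timer and measures how late it fires; sustained lag means synchronous work is jamming the loop:

```javascript
// Minimal event-loop lag monitor (illustrative sketch, not production tooling).
// A timer is expected every INTERVAL_MS; any extra delay is event-loop lag
// caused by long synchronous work blocking the loop.
const INTERVAL_MS = 100;

function startLagMonitor(onLag, thresholdMs = 50) {
  let last = process.hrtime.bigint();
  const timer = setInterval(() => {
    const now = process.hrtime.bigint();
    const elapsedMs = Number(now - last) / 1e6;
    const lagMs = elapsedMs - INTERVAL_MS; // how late the timer actually fired
    if (lagMs > thresholdMs) onLag(lagMs);
    last = now;
  }, INTERVAL_MS);
  timer.unref(); // monitoring alone should not keep the process alive
  return timer;
}
```

In a real service you would feed `onLag` into your metrics pipeline; here it is just a callback so the sketch stays self-contained.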
You’ll be dealing with a microservice architecture (30+ microservices) where the vast majority of services run on Node.js. Microservices either provide HTTP REST APIs using Express and Fastify (no Next.js, no GraphQL) or use WebSockets. Cloud infrastructure is AWS, CI/CD pipelines run on GitLab and Bitbucket. Container orchestration is K8s and Docker Swarm. Data pipelines mostly run on AWS Lambda.
Code is all ECMAScript. Code quality is high overall, though in some data pipelines it’s merely acceptable. There are no variables named “x”. There’s a coding convention. Overall, we value solving problems and delivering value to our customers far above adopting fancy new technology for its own sake.
The workflow is a lightweight mix of scrum and kanban, with a low level of formality. There’s a QA team, but engineers test their work thoroughly too. You’ll be supervised in the beginning. Once we’re on the same page about engineering values, supervision will taper off until you’re making most of the decisions yourself. The team is all remote, from Argentina to Ukraine. Languages are English and Russian. Speaking Russian would be a bonus, but not a must-have.
The hiring process is short and straightforward: first a few emails, then one interview call with 1-2 people, and that’s it.
While we expect you to primarily deal with the data pipelines, this position offers a number of directions for growth. You will have the opportunity to work with highly loaded components (e.g., real-time market data firehose intake) and fairly sophisticated architecture. You will also have a chance to work with active traders and learn about markets and trading in general, as well as backtesting and algorithmic trading. You’ll be able to work on business logic related to algo trading if you’re interested and capable. In general, if you’re interested in trading and want to dive deep, this is the right place for you.