- Development of Deep.BI platform backend features, mostly APIs & data flows
- Other technologies used in the project: Kafka, Flink, Hadoop, AWS, Ansible, H2O.ai, Java, Scala, R
- Level: Senior (more than 5 years of experience)
- Location: Warsaw, Poland
About Deep.BI, Inc.
Deep.BI is a data platform for media companies. It can save up to 95% of the cost of building and maintaining an in-house big data solution, along with years of development time.
Data plays a fundamental role in every aspect of media, including:
- new product development
- increasing audience engagement
- monetization (subscription, ads, branded content)
Deep.BI makes data collection, integration, storage, analytics and usage easy. It removes the complexity of implementing big data technology and thus minimizes risk and cost. We use a modern, real-time stack including Node.js, Kafka, Flink & Druid. We have built our own HA, hybrid data cloud (currently ~400 cores) and we're scaling it horizontally.
We also experiment with a conversational user interface for our analytics platform, where customers get insights from chatbots. As a next step, we are working on bot-to-bot communication to automate processes (RPA, Robotic Process Automation).
We're a young startup with a small team of enthusiasts, solid financing from well-known business angels, and our first big media customers in the US and Europe.
We invite the best, passionate people. Let's talk and find out if there's a fit.
- Build internal APIs
- Build integrations with 3rd party solutions
- Design, implement and maintain complex data flows, enrichments and ETL processes
- Deploy code in a distributed / HA environment using Ansible
- Write large amounts of code, perform code reviews, write unit tests
- Perform extensive research and analysis to make optimal architecture and design decisions
- Write documentation
- Create quick proof-of-concept prototypes
- Participate in a scrum team
- Mentor junior developers
The ideal candidate should also be able to work with Kafka, Druid and Flink, and write Java / Scala code. Example tasks:
- Integrate with ad servers, programmatic platforms and DMPs
- Write Druid reindexing jobs on Hadoop
- Write extensions to Druid; debug, recompile and contribute to Druid
- Participate in data science / artificial intelligence projects
- BS or MS in Computer Science or equivalent experience
- Experience in RESTful API integrations
- Advanced knowledge of NoSQL databases (e.g. MongoDB, Druid.io)
- Team player with strong communication skills
- Desire to learn fast and pick up the latest technologies
- Results-oriented: able to deliver quickly and build sound prototypes and demos
- Intellectual curiosity, along with excellent problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
- Experience with media or advertising environment (programmatic, DSP, SSP, DMP, ad servers)
- Experience in building analytics / big data solutions
- Salary: 10-18k PLN (different types of contract available) + paid holidays (20 or 26 days)
- Work in a young startup with solid financing, among passionate and friendly people
- Stock option plan
- Flexible working hours, possibility of occasional remote work
- Each member of the team has real influence on the product, a state-of-the-art big data & AI platform
- Great office location: a beautiful co-working space on Senatorska Street
If you believe there's a fit, apply now!
Any doubts? Drop us an email: email@example.com