Bandora is an engineering company that is aware of climate change and its relationship to the way humans use energy sources. Bandora's main objective is to help other companies improve their energy consumption and the maintenance of the systems installed inside their buildings. For this purpose, Bandora has a Data Science Department that uses AI to solve real problems, with the goal of reducing energy consumption. Due to the company's fast growth, alongside relevant institutions in the USA, Bandora is searching for people who can embrace the company's culture and help improve energy efficiency around the world.
We are looking for a Data Engineer with experience in data integration and in maintaining big data platforms. You will support the data science team by designing, developing, testing, and continually improving tools that deliver scalable and effective data infrastructure systems. Our systems receive real-time data to provide data visualization and recommendation systems for our clients. The objective is to expand and optimize our data and data pipeline architecture, as well as to optimize data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
· BS/MS Degree in Computer Engineering, Computer Science, Applied Math, or a similar area;
· Strong understanding of distributed computing principles and experience with distributed systems;
· Develop, deploy, and maintain big data solutions that ingest, process, and store the necessary data;
· Experience with NoSQL databases, such as MongoDB, Cassandra, and HBase;
· Experience with webservices/API;
· Experience working with Docker;
· Monitoring dataflows and underlying systems, promoting the necessary changes to ensure scalable, high performance solutions and assure data quality and availability;
· Support machine learning pipelines and tasks;
· Developing and maintaining an ETL pipeline;
· Fluent in English (most of our clients are abroad), with excellent communication skills;
· Common sense and a passion for solving problems;
· Be a team player;
· Able to advise on new approaches/methodologies, technologies, and solutions;
· Eager to learn and to share knowledge.
Nice to have
· Experience working with big data technologies (e.g., Hadoop, Spark, Hive, Kafka, Spark Streaming, Apache Airflow) and cloud services (Azure, Google Cloud, or AWS); certifications in GCP are a plus;
· Experience developing production software;
What we offer
· Commitment to your development (we enjoy sharing and learning);
· A close relationship with the business, with high visibility and recognition;
· Work in a non-hierarchical environment;
· No dress code.