Location: New York City, NY
This team provides the core infrastructure and service framework to manage, access, and analyze current and historical financial datasets in a distributed environment. We work closely with multiple businesses to design and develop systems that enable them to build better data-driven financial products and make better decisions. The team deals with the challenges of storage, low-latency retrieval, high request volume, scalability, and high availability posed by the requirements of our varied applications. It is a mission-critical system with 24x7x365 uptime. We are currently adopting big data technologies to manage our ever-increasing volumes of data and to leverage their scalability, performance, and cost efficiencies. The technologies in use include Hadoop, HBase, IBM DB2, Kafka/RabbitMQ, Oozie, Storm, and MapReduce, with others such as Flume, Solr, and Logstash under consideration.
We are seeking a self-motivated, talented programmer with an interest in solving data problems. In this role, you will participate in the design and implementation of components and systems that must be highly efficient, robust, and scalable. A successful candidate will have proven experience with the Hadoop stack and NoSQL data stores (preferably HBase), experience working on critical infrastructure, and a desire to drive a product forward. The candidate must exhibit a passion for big data technologies and a flexible, creative approach to problem solving.
- 5+ years of Java programming experience.
- Experience with Hadoop/HDFS/MapReduce.
- Experience with HBase, NoSQL, or similar technology.
- Experience with Oozie, ZooKeeper, Flume, Solr, Elasticsearch, Storm, or Spark is a plus.
- Experience with MQ technologies (Kafka, RabbitMQ) is a plus.
- C/C++ programming experience is a plus.
- Strong problem-solving and communication skills; enjoys a collaborative environment.
- Experience enhancing and maintaining mission-critical software in a fast-paced environment is a plus.