Project Description
Development of web applications using Free and Open Source Software (FOSS) and provision of system administration to support GMRT (Giant Metrewave Radio Telescope) observations and analysis.

Responsibilities
* Extensively working on developing web applications for e-Tenders and an Online Application Management System for the intake, shortlisting and management of various academic programmes at NCRA-TIFR, Pune, on a reusable framework using Java, J2EE, Struts2, Hibernate, Python, Shell Scripting, Ajax, jQuery and Bootstrap.
* Working with various image processing libraries to build a pipeline for automated creation of JPEG images from astronomical FITS images, integrated with the GMRT Online Archive System so that astronomers get a quick preview of what a source looks like (see the Python sketch below).
* Working on further developing, maintaining and enhancing the capabilities of the existing NCRA Proposal System and the GMRT Online Archive web application.
* Upgrading NCRA's new website, built on top of Plone 5 and configured to the needs of NCRA-TIFR with the required add-on packages, an external database (MySQL) and LDAP authentication.
* Working on system configuration, administration and automation of various backend processes using Python and Shell Scripting.
* Maintaining various implemented projects for all NCRA academic and scientific programmes running on Linux servers.

Some links:
http://www.ncra.tifr.res.in
http://trac.ncra.tifr.res.in/trac
http://vsrp.ncra.tifr.res.in/VSRP2018/
http://inat.ncra.tifr.res.in/INAT2018/
http://raws.ncra.tifr.res.in/RAWS2018/
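As an illustration of the FITS-to-JPEG preview step described in the responsibilities above, here is a minimal Python sketch assuming Astropy, NumPy and Pillow; the file names and percentile-based scaling are illustrative choices, not the actual GMRT archive pipeline.

# Hypothetical sketch of a FITS-to-JPEG preview conversion step.
# Assumes astropy, numpy and Pillow; paths and scaling choices are illustrative.
import numpy as np
from astropy.io import fits
from PIL import Image

def fits_to_jpeg(fits_path, jpeg_path, low_pct=1.0, high_pct=99.5):
    """Read the primary HDU of a FITS file and write an 8-bit JPEG preview."""
    with fits.open(fits_path) as hdul:
        # Assumes the image lives in the primary HDU.
        data = hdul[0].data.astype(np.float64)

    # Clip to percentiles so a few bright pixels do not wash out the preview.
    lo, hi = np.percentile(data, [low_pct, high_pct])
    data = np.clip(data, lo, hi)

    # Scale to 0-255; flip vertically so the preview is displayed "north up".
    scaled = ((data - lo) / max(hi - lo, 1e-12) * 255.0).astype(np.uint8)
    Image.fromarray(np.ascontiguousarray(np.flipud(scaled))).save(jpeg_path, "JPEG")

if __name__ == "__main__":
    fits_to_jpeg("source.fits", "source_preview.jpg")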
Worked as a Senior Software Developer at storageDNA, Pune from October 2012 to December 2015.
storageDNA provides end-to-end solutions for archiving, restoring and media asset management of data that is not used regularly. StorageDNA's workflow solution is built on Linear Tape Open (LTO), Linear Tape File System (LTFS) and ODA technologies, which streamline file-based workflows, improve efficiency and save significant storage costs, while allowing digital assets to be archived, found, restored and accessed directly at high speed.
Project Description
Worked on further developing, enhancing and maintaining storageDNA's products, covering their core, user interface and other aspects.

Responsibilities
* Extensively worked with Java, J2EE, Apache Lucene, Struts2, Hibernate, XML, MySQL, Python, Shell Scripting, Ajax, jQuery and Bootstrap.
* Implemented incremental Lucene indexing for newly archived data.
* Implemented the global search mechanism for the DNAEvolution product, which uses Lucene indexes to search for a keyword or logical expression across the archives.
* Worked on media catalog metadata analytics using Lucene's taxonomy index (facet search) and the Hadoop ecosystem. The taxonomy index for the archives was built by iterating over an analytics XML generated during the initial stages of archiving data to tape; its metadata was fed into an HDFS cache for various data processing tasks, the results were analysed with MapReduce, Pig or Hive, and finally fed to Lucene for indexing. The analytics XML defines categories and sub-categories, with sub-categories carrying further logical conditions and regular expressions; on querying, results are returned according to these categories and sub-categories, which help drill the query down to the required results. Analytics XMLs configured for different archives are parsed according to the rules defined in them, so the whole catalog structure, containing many archives, can be iterated to reach the required results. The XMLs are placed in a watch folder, configured in the configuration file, ready for processing (see the Python sketch below).
* Integrated Flowplayer with low-level HTML5 video API calls for seeking, frame-based manipulation and rendering.
* Built a Hadoop cluster for analytics derived by parsing structured and unstructured archival data with Pig, building analytics over various parameters.
* Worked with various image processing libraries for automated detection of hot axle boxes and for text recognition in an Indian Railways project.
* Worked with various open-source libraries such as vmtouch, udev, Apache Tika, unoconv, mxflib, videodumper, FFmpeg, FFmbc, OpenCV, aubio, Tesseract and ExifTool to automate various backend processes using Python and Shell Scripting.
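To illustrate the analytics-XML drill-down described above, here is a rough Python sketch; the XML schema (category/subcategory elements with a regexp attribute) and the sample asset metadata are invented for the example and do not reflect storageDNA's actual format.

# Illustrative sketch of drilling down through an analytics XML whose
# sub-categories carry regular-expression conditions; the schema shown
# here is hypothetical, not storageDNA's actual format.
import re
import xml.etree.ElementTree as ET

ANALYTICS_XML = """
<analytics>
  <category name="Camera">
    <subcategory name="ARRI" regexp="(?i)arri|alexa"/>
    <subcategory name="RED"  regexp="(?i)red|epic"/>
  </category>
  <category name="Resolution">
    <subcategory name="4K" regexp="4096x2160|3840x2160"/>
    <subcategory name="HD" regexp="1920x1080"/>
  </category>
</analytics>
"""

# Toy stand-ins for per-asset metadata strings pulled from an archive catalog.
ASSETS = {
    "clip_001.mxf": "ARRI Alexa 3840x2160",
    "clip_002.mxf": "RED Epic 4096x2160",
    "clip_003.mxf": "GoPro 1920x1080",
}

def drill_down(xml_text, assets):
    """Group assets under (category, sub-category) buckets using the regexp rules."""
    buckets = {}
    root = ET.fromstring(xml_text)
    for category in root.findall("category"):
        for sub in category.findall("subcategory"):
            pattern = re.compile(sub.get("regexp"))
            key = (category.get("name"), sub.get("name"))
            buckets[key] = [name for name, meta in assets.items()
                            if pattern.search(meta)]
    return buckets

if __name__ == "__main__":
    for (cat, sub), names in drill_down(ANALYTICS_XML, ASSETS).items():
        print(f"{cat} / {sub}: {names}")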
Worked as a Visiting Engineer (Software) at National Centre for Radio Astrophysics – Tata Institute of Fundamental Research, Pune from December 2009 to August 2012.
National Centre for Radio Astrophysics – Tata Institute of Fundamental Research, Pune is a leading research centre under the umbrella of the Department of Atomic Energy, Government of India, with an active research programme in radio astrophysics. It built and operates the Giant Metrewave Radio Telescope, located near Pune, which is the most powerful radio telescope in the world at low radio frequencies.
Involved in designing and developing software systems to cater to diverse needs, including the new NCRA website and an online application form and management system built on a reusable framework, to support GMRT (Giant Metrewave Radio Telescope) observations and analysis, using programming languages such as Java, JSP, JDBC, Struts2, Hibernate, MySQL, Shell Scripting, Python, Scheme and Linux, and open-source tools like Plone, Zope, LDAP, Subversion, Trac and Mailman. Also worked on system configuration, administration and maintenance of various Linux tools for all NCRA academic and scientific programs on a Linux server.
Project Description
Development of software using FOSS and provision of system administration to support GMRT (Giant Metrewave Radio Telescope) observations and analysis.

Responsibilities
* Development of an Online Application Management System for the intake, shortlisting and management of various academic programmes at NCRA-TIFR, Pune, on a reusable framework.
* Building NCRA's new website on top of Plone, which runs on the Zope (Z Object Publishing Environment) application server, configured to the needs of NCRA-TIFR with the required add-on packages and an external database (MySQL), and integrated with an LDAP authentication server to give users Single Sign-On across all the applications (see the Python sketch below).
* Worked on system configuration, administration and maintenance of various Linux tools and libraries for all NCRA academic and scientific programs on a Linux server.
* Maintenance of the NCRA Proposal System and the GMRT Online Archive web application.

Some links:
http://www.ncra.tifr.res.in
http://trac.ncra.tifr.res.in/trac
https://naps.ncra.tifr.res.in
http://mutha.ncra.tifr.res.in:8080/INAT2015/
http://mutha.ncra.tifr.res.in:8081/mailman/listinfo
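As a sketch of the LDAP-backed Single Sign-On check mentioned above, the snippet below performs a simple bind using the ldap3 library; the server URL, base DN and uid attribute are placeholders rather than NCRA's actual directory configuration.

# Minimal sketch of authenticating a user against an LDAP server with ldap3.
# The host, base DN and uid attribute below are placeholders, not NCRA's
# actual directory configuration.
from ldap3 import Server, Connection, ALL
from ldap3.core.exceptions import LDAPBindError

LDAP_URL = "ldap://ldap.example.org"        # placeholder host
BASE_DN = "ou=people,dc=example,dc=org"     # placeholder base DN

def authenticate(username, password):
    """Return True if a simple bind as the user's DN succeeds."""
    user_dn = f"uid={username},{BASE_DN}"
    server = Server(LDAP_URL, get_info=ALL)
    try:
        # auto_bind=True raises LDAPBindError if the credentials are rejected.
        conn = Connection(server, user=user_dn, password=password, auto_bind=True)
    except LDAPBindError:
        return False
    conn.unbind()
    return True

if __name__ == "__main__":
    print(authenticate("someuser", "secret"))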
Worked with LG Electronics India Pvt. Ltd., Noida, at OEM E-Durables (manufacturing plant of LG in Dehradun), from June 2007 to June 2008 as a Quality Engineer.
Job Profile:
* Built the quality infrastructure and provided quality training to quality manpower.
* Independently handled the various activities of IQC, with incoming and outgoing sample inspection according to lot size as per AQL (Acceptable Quality Level).
* Maintained product reliability through OQC and ELT tests.
* Analysed line rejections through RPS, FRP, 6Tool and innovations.
* Registered Six Sigma projects on major issues to minimise problems.
* Attended daily meetings with the Production department to discuss problems, and worked on vendor development.
* Analysed SCR and FQR rejection trends and took countermeasures on the line for their improvement through 6Tool audits, line process audits, ESD audits, 3F audits, 360° audits, Q System audits and documentation audits.
* Managed ECNs, WIs, QC flow charts and circuit diagrams.
* Maintained and presented daily, weekly and monthly SCR (Service Complaint Report) analysis reports, and maintained the CSS Room.