How do you analyze data from a machine tool? Standards like MTConnect help in retrieving the data, but how do you go about applying analytical and statistical algorithms to it? Tools like the R statistical package can greatly help here – they come with a wide array of libraries that can be applied out-of-the-box for statistical analysis. But this still leaves us with figuring out how to bring MTConnect data into R – enter mtconnectR.

mtconnectR can read in data from an MTConnect Agent, parse the probe XML, and store the data in a form appropriate for analysis using existing R libraries. In this blog post we give an example of using the mtconnectR package.

Data Set Credits

For a real-life working example, we have a dataset graciously provided to us by the National Institute of Standards and Technology (NIST) for one of their test parts. We will be trying to solve a real problem faced by the NIST researchers. As we walk through each step of the exploratory process, you will see how you can use the same techniques to solve similar issues with your own machine tools.

Problem Statement

Accurately estimate the productive time of a part from the data of a part that was manufactured with interruptions and tool path inefficiencies. We define productive time as the time taken by the machine to complete the part excluding interruptions and inefficiencies.

Reading Data into R

Let us specify the data files that we are going to work with. Example data from MTConnect Agent samples (a delimited log file) and the result of an MTConnect probe (a Devices XML file) are provided along with the package. Note that the package can read in the log file even if it is compressed.

# Load the required packages: mtconnectR, plus dplyr (lead, mutate, %>%),
# ggplot2 (plots) and reshape2 (melt), which are used throughout this post
library(mtconnectR)
library(dplyr)
library(ggplot2)
library(reshape2)

file_path_dmtcd = "../data/delimited_mtc_data/nist_test_bed/GF_Agie.tar.gz"
file_path_xml = "../data/delimited_mtc_data/nist_test_bed/Devices.xml"

Before we read the data into the MTCDevice class, it helps to understand a little about the data that we have.

Devices XML Data


The MTConnect Devices XML document has information about the logical components of one or more devices. This file can be obtained using the probe request from an MTConnect Agent.

We can check out the devices for which info is present in the Devices XML using the get_device_info_from_xml function. From the device info, we can select the name of the device that we want to analyse further.

(device_info = get_device_info_from_xml(file_path_xml))
##                      name                           uuid           id
## 1 nist_testbed_Mazak_QT_1 nist_testbed_Mazak_QT_1_74fd52 Mazak_QT_1_1
## 2  nist_testbed_GF_Agie_1  nist_testbed_GF_Agie_1_3a0e8a GF_Agie_1_78
device_name = device_info$name[2]


The get_xpaths_from_xml function reads the xpath info for a single device into an easily readable data.frame format.

The data.frame contains the id and name of each data item, along with its xpath, type, category and subType. This makes it easy to find all the data items of a particular type.

xpath_info = get_xpaths_from_xml(file_path_xml, device_name)
##        id      name           type  category subType
## 1 dtop_79     avail   AVAILABILITY     EVENT    <NA>
## 2 dtop_80     estop EMERGENCY_STOP     EVENT    <NA>
## 3 dtop_81    system         SYSTEM CONDITION    <NA>
## 4    X_84 Xposition       POSITION    SAMPLE  ACTUAL
## 5    Y_86 Yposition       POSITION    SAMPLE  ACTUAL
## 6    Z_88 Zposition       POSITION    SAMPLE  ACTUAL
##                                                       xpath
## 1        nist_testbed_GF_Agie_1<Device>:avail<AVAILABILITY>
## 2      nist_testbed_GF_Agie_1<Device>:estop<EMERGENCY_STOP>
## 3             nist_testbed_GF_Agie_1<Device>:system<SYSTEM>
## 4 nist_testbed_GF_Agie_1<Device>:Xposition<POSITION-ACTUAL>
## 5 nist_testbed_GF_Agie_1<Device>:Yposition<POSITION-ACTUAL>
## 6 nist_testbed_GF_Agie_1<Device>:Zposition<POSITION-ACTUAL>
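For example, we can filter xpath_info down to a single category or type with ordinary data.frame subsetting. A minimal sketch on a toy data.frame that mirrors the columns above (the real xpath_info works the same way):

```r
# Toy data.frame mirroring the columns of xpath_info above
xpath_info_toy = data.frame(
  id       = c("dtop_79", "X_84", "Y_86"),
  name     = c("avail", "Xposition", "Yposition"),
  type     = c("AVAILABILITY", "POSITION", "POSITION"),
  category = c("EVENT", "SAMPLE", "SAMPLE"),
  stringsAsFactors = FALSE
)

# Keep only the SAMPLE-category position data items
subset(xpath_info_toy, category == "SAMPLE" & type == "POSITION")
```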

Getting Sample data by parsing MTConnectStreams data

MTConnectStreams data from an MTConnect Agent can be collected using a Ruby script to generate a delimited log of device data (referred to in this document as log data), which is then read by the mtconnectR package.

Creating MTC Device Class

The create_mtc_device_from_dmtcd function reads in the delimited MTConnect data (DMTCD) and the XML data for a device and combines them into a single MTCDevice class, with the data organized separately for each data item.

mtc_device = create_mtc_device_from_dmtcd(file_path_dmtcd, file_path_xml, device_name)
##  [1] "nist_testbed_GF_Agie_1<Device>:Aposition<ANGLE-ACTUAL>"     
##  [2] "nist_testbed_GF_Agie_1<Device>:avail<AVAILABILITY>"         
##  [3] "nist_testbed_GF_Agie_1<Device>:Cposition<ANGLE-ACTUAL>"     
##  [4] "nist_testbed_GF_Agie_1<Device>:estop<EMERGENCY_STOP>"       
##  [5] "nist_testbed_GF_Agie_1<Device>:execution<EXECUTION>"        
##  [6] "nist_testbed_GF_Agie_1<Device>:Fovr<PATH_FEEDRATE-OVERRIDE>"
##  [7] "nist_testbed_GF_Agie_1<Device>:line<LINE>"                  
##  [8] "nist_testbed_GF_Agie_1<Device>:mode<CONTROLLER_MODE>"       
##  [9] "nist_testbed_GF_Agie_1<Device>:move<x:MOTION>"              
## [10] "nist_testbed_GF_Agie_1<Device>:program<PROGRAM>"            
## [11] "nist_testbed_GF_Agie_1<Device>:Sovr<SPINDLE_SPEED-OVERRIDE>"
## [12] "nist_testbed_GF_Agie_1<Device>:Xposition<POSITION-ACTUAL>"  
## [13] "nist_testbed_GF_Agie_1<Device>:Yposition<POSITION-ACTUAL>"  
## [14] "nist_testbed_GF_Agie_1<Device>:Zposition<POSITION-ACTUAL>"  


Exploring different data items

It looks like the position data items that we need for this analysis are present in the log data. Let's see how the position varies over time. We can plot each position data item using ggplot2.

Plotting the data

xpos_data = getDataItem(mtc_device, "Xposition") %>% getData()
ypos_data = getDataItem(mtc_device, "Yposition") %>% getData()
zpos_data = getDataItem(mtc_device, "Zposition") %>% getData()

ggplot() + geom_line(data = xpos_data, aes(x = timestamp, y = value))
ggplot() + geom_line(data = ypos_data, aes(x = timestamp, y = value))
ggplot() + geom_line(data = zpos_data, aes(x = timestamp, y = value))

Merging different data items for simultaneous analysis

It looks like the machine is going back and forth quite often, across all the axes. We also don't know how this traversal varies across the different axes. We can get a much better idea of the motion if we plot motion on one axis against another. For that we have to merge the different data items. Since the different data items have different timestamp values as the key, this is not as straightforward as joining one data item against another. For this purpose, the mtconnectR package has a merge method defined for the MTCDevice class.
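To see why this needs care, consider what a timestamp-keyed merge has to do. We don't know the package's exact internals, but a reasonable mental model is a full outer join on timestamp with each data item's last known value carried forward; a minimal base-R sketch on toy data:

```r
# Two data items sampled at different timestamps (toy data, not from the dataset)
x = data.frame(timestamp = c(1, 3, 5), Xposition = c(10, 11, 12))
y = data.frame(timestamp = c(2, 3, 6), Yposition = c(20, 21, 22))

merged = merge(x, y, by = "timestamp", all = TRUE)  # full outer join

# Carry the last observed value forward to fill the NA gaps
locf = function(v) {
  for (i in seq_along(v)[-1]) if (is.na(v[i])) v[i] = v[i - 1]
  v
}
merged$Xposition = locf(merged$Xposition)  # 10, 10, 11, 12, 12
merged$Yposition = locf(merged$Yposition)  # NA, 20, 21, 21, 22
```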

merged_pos_data = merge(mtc_device, "position") # merge all dataitems with the word position
##                    timestamp
## 1 2015-11-02 14:58:49.994391
## 2 2015-11-02 14:59:03.742392
## 3 2015-11-02 14:59:03.886408
## 4 2015-11-02 14:59:14.122334
## 5 2015-11-02 14:59:14.270387
## 6 2015-11-02 14:59:22.486440
##   nist_testbed_GF_Agie_1<Device>:Aposition<ANGLE-ACTUAL>
## 1                                                -0.0001
## 2                                                -0.0001
## 3                                                -0.0001
## 4                                                -0.0001
## 5                                                -0.0001
## 6                                                -0.0001
##   nist_testbed_GF_Agie_1<Device>:Cposition<ANGLE-ACTUAL>
## 1                                                 0.0278
## 2                                                 0.0278
## 3                                                 0.0278
## 4                                                 0.0278
## 5                                                 0.0278
## 6                                                 0.0278
##   nist_testbed_GF_Agie_1<Device>:Xposition<POSITION-ACTUAL>
## 1                                                  33.69547
## 2                                                  33.69548
## 3                                                  33.69547
## 4                                                  33.69548
## 5                                                  33.69547
## 6                                                  33.69548
##   nist_testbed_GF_Agie_1<Device>:Yposition<POSITION-ACTUAL>
## 1                                                 -38.69783
## 2                                                 -38.69783
## 3                                                 -38.69783
## 4                                                 -38.69783
## 5                                                 -38.69783
## 6                                                 -38.69784
##   nist_testbed_GF_Agie_1<Device>:Zposition<POSITION-ACTUAL>
## 1                                                  20.37543
## 2                                                  20.37543
## 3                                                  20.37543
## 4                                                  20.37543
## 5                                                  20.37543
## 6                                                  20.37543

Oops. Looks like we have also merged in the angular position. Let's try a more directed merge. Also, the names of the data items have the full xpaths attached to them. While this might be useful in other circumstances to get the hierarchical position of the data, we can dispense with it now using the extract_param_from_xpath function. Let's view the data after that. 

merged_pos_data = merge(mtc_device, "position<POSITION-ACTUAL") # merge only the linear position data items
names(merged_pos_data) = extract_param_from_xpath(names(merged_pos_data), param = "DIName", show_warnings = F)
##                    timestamp Xposition Yposition Zposition
## 1 2015-11-02 14:58:49.994391  33.69547 -38.69783  20.37543
## 2 2015-11-02 14:59:03.742392  33.69548 -38.69783  20.37543
## 3 2015-11-02 14:59:03.886408  33.69547 -38.69783  20.37543
## 4 2015-11-02 14:59:14.122334  33.69548 -38.69783  20.37543
## 5 2015-11-02 14:59:14.270387  33.69547 -38.69783  20.37543
## 6 2015-11-02 14:59:22.486440  33.69548 -38.69784  20.37543

Much better. Now let's plot the data items in one go.

ggplot(data = merged_pos_data, aes(x = timestamp)) +
geom_line(aes(y = Xposition, col = 'Xpos')) +
geom_line(aes(y = Yposition, col = 'Ypos')) +
geom_line(aes(y = Zposition, col = 'Zpos')) +
theme(legend.title = element_blank())

It does look like the sudden traversals are simultaneous across the axes. Plotting one axis against another leads to the same conclusion. It also gives us an idea of the different views of the part.

ggplot(data = merged_pos_data, aes(x = Xposition, y = Yposition)) + geom_path()
ggplot(data = merged_pos_data, aes(x = Xposition, y = Zposition)) + geom_path()
ggplot(data = merged_pos_data, aes(x = Zposition, y = Yposition)) + geom_path()

So the machine tool is going to the origin every so often.

Deriving new process parameters

It might help our analysis to also calculate a few process parameters that the machine tool is not providing directly. Here we are going to calculate the actual path feedrate of the machine as it executes the process using the position data.

Derived Path Feedrate

Path feedrate can be calculated as the rate of change of the position values. Here we must use the 3-dimensional distance, not just the change along a single axis:

PFR = Total Distance / Total Time = sqrt(ΔX^2 + ΔY^2 + ΔZ^2) / time taken for the motion

position_change_3d = 
((lead(merged_pos_data$Xposition, 1) - merged_pos_data$Xposition) ^ 2 +
(lead(merged_pos_data$Yposition, 1) - merged_pos_data$Yposition) ^ 2 +
(lead(merged_pos_data$Zposition, 1) - merged_pos_data$Zposition) ^ 2 ) ^ 0.5

merged_pos_data$time_taken = 
lead(as.numeric(merged_pos_data$timestamp), 1) - as.numeric(merged_pos_data$timestamp)

merged_pos_data$pfr = round(position_change_3d / merged_pos_data$time_taken, 4)
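As a quick sanity check of the computation above on toy numbers (not from the dataset): a move of 3 units in X and 4 units in Y over 2 seconds should give a path feedrate of sqrt(3^2 + 4^2) / 2 = 2.5 units/s.

```r
toy = data.frame(timestamp = c(0, 2),
                 Xposition = c(0, 3), Yposition = c(0, 4), Zposition = c(0, 0))
# Same 3D distance formula as above, written with diff() for a two-row example
dist_3d = sqrt(diff(toy$Xposition)^2 + diff(toy$Yposition)^2 + diff(toy$Zposition)^2)
dist_3d / diff(toy$timestamp)  # 2.5
```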

dt.df <- melt(merged_pos_data, measure.vars = c("pfr", "Xposition", "Yposition"))
ggplot(dt.df, aes(x = timestamp, y = value)) +
geom_line(aes(color = variable)) +
facet_grid(variable ~ ., scales = "free_y") 
## Warning: Removed 1 rows containing missing values (geom_path).
ggplot(data = merged_pos_data, aes(x = timestamp)) + 
geom_step(aes(y = pfr)) +
geom_step(aes(y = Xposition)) 
## Warning: Removed 1 rows containing missing values (geom_path).

Let's add this derived data back into the MTCDevice Class.

pfr_data = merged_pos_data %>% select(timestamp, value = pfr) # Structuring data correctly
mtc_device = add_data_item_to_mtc_device(mtc_device, pfr_data, data_item_name = "pfr<PATH_FEEDRATE>",
 data_item_type = "Sample", source_type = "calculated")
##  [1] "nist_testbed_GF_Agie_1<Device>:Aposition<ANGLE-ACTUAL>"     
##  [2] "nist_testbed_GF_Agie_1<Device>:avail<AVAILABILITY>"         
##  [3] "nist_testbed_GF_Agie_1<Device>:Cposition<ANGLE-ACTUAL>"     
##  [4] "nist_testbed_GF_Agie_1<Device>:estop<EMERGENCY_STOP>"       
##  [5] "nist_testbed_GF_Agie_1<Device>:execution<EXECUTION>"        
##  [6] "nist_testbed_GF_Agie_1<Device>:Fovr<PATH_FEEDRATE-OVERRIDE>"
##  [7] "nist_testbed_GF_Agie_1<Device>:line<LINE>"                  
##  [8] "nist_testbed_GF_Agie_1<Device>:mode<CONTROLLER_MODE>"       
##  [9] "nist_testbed_GF_Agie_1<Device>:move<x:MOTION>"              
## [10] "nist_testbed_GF_Agie_1<Device>:program<PROGRAM>"            
## [11] "nist_testbed_GF_Agie_1<Device>:Sovr<SPINDLE_SPEED-OVERRIDE>"
## [12] "nist_testbed_GF_Agie_1<Device>:Xposition<POSITION-ACTUAL>"  
## [13] "nist_testbed_GF_Agie_1<Device>:Yposition<POSITION-ACTUAL>"  
## [14] "nist_testbed_GF_Agie_1<Device>:Zposition<POSITION-ACTUAL>"  
## [15] "pfr<PATH_FEEDRATE>"

Identifying Inefficiencies

Idle times

Our first task is to identify the periods when the machine was idle. For this we can use a few approaches.

  • Find out the times when the execution status was not active OR
  • Find out the times when the machine was not feeding (PFR~0) OR
  • Find the periods when the feed override was zero

We will try out all three approaches and take the union of the three as the periods when the machine is idle.

# Getting all the relevant data
merged_data = merge(mtc_device, "EXECUTION|PATH_FEEDRATE|POSITION")
names(merged_data) = extract_param_from_xpath(names(merged_data), param = "DIName", show_warnings = F)

merged_data = merged_data %>% 
mutate(exec_idle = F, feed_idle = F, override_idle = F) %>% # Setting everything false by default
mutate(exec_idle = replace(exec_idle, !(execution %in% "ACTIVE"), TRUE)) %>% 
mutate(feed_idle = replace(feed_idle, pfr < 0.01, TRUE)) %>% 
mutate(override_idle = replace(override_idle, Fovr < 1, TRUE)) %>% 
mutate(machine_idle = as.logical(exec_idle + feed_idle + override_idle))
##                    timestamp execution   Fovr Xposition Yposition
## 1 2015-11-02 14:58:49.990541      <NA> 111.25        NA        NA
## 2 2015-11-02 14:58:49.994391      <NA> 111.25  33.69547 -38.69783
## 3 2015-11-02 14:59:03.742392      <NA> 111.25  33.69548 -38.69783
## 4 2015-11-02 14:59:03.886408      <NA> 111.25  33.69547 -38.69783
## 5 2015-11-02 14:59:14.122334      <NA> 111.25  33.69548 -38.69783
## 6 2015-11-02 14:59:14.270387      <NA> 111.25  33.69547 -38.69783
##   Zposition    pfr exec_idle feed_idle override_idle machine_idle
## 1        NA     NA      TRUE     FALSE         FALSE         TRUE
## 2  20.37543 0.0000      TRUE      TRUE         FALSE         TRUE
## 3  20.37543 0.0001      TRUE      TRUE         FALSE         TRUE
## 4  20.37543 0.0000      TRUE      TRUE         FALSE         TRUE
## 5  20.37543 0.0001      TRUE      TRUE         FALSE         TRUE
## 6  20.37543 0.0000      TRUE      TRUE         FALSE         TRUE

Machine tool at origin

We need to identify the time spent by the machine at the origin. Let's look at the X-Y graph again.

ggplot(data = merged_pos_data, aes(x = Xposition, y = Yposition)) + geom_path()

It is clear that the periods when the machine was at the origin correspond roughly to the region X > 30, Y < -30. Adding this into the mix:

merged_data_final = merged_data %>% 
mutate(at_origin = F) %>% # Setting everything false by default
mutate(at_origin = replace(at_origin, Xposition > 30 & Yposition < -30, TRUE)) %>% 
select(timestamp, machine_idle, at_origin)
##                    timestamp machine_idle at_origin
## 1 2015-11-02 14:58:49.990541         TRUE     FALSE
## 2 2015-11-02 14:58:49.994391         TRUE      TRUE
## 3 2015-11-02 14:59:03.742392         TRUE      TRUE
## 4 2015-11-02 14:59:03.886408         TRUE      TRUE
## 5 2015-11-02 14:59:14.122334         TRUE      TRUE
## 6 2015-11-02 14:59:14.270387         TRUE      TRUE

Calculating Summary Statistics

Now we have all the data at our disposal to calculate the time statistics. First we need to convert the time series into interval format to get the durations; the convert_ts_to_interval function does exactly this.
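Conceptually, each observation's state holds from its timestamp until the next observation arrives, so each interval's duration is just the gap to the next timestamp. A minimal sketch of that idea on toy data (the actual convert_ts_to_interval function does this plus additional bookkeeping, which may differ in detail):

```r
# Toy time series: timestamps in seconds plus a state column
ts_toy = data.frame(timestamp = c(0, 13.75, 13.89, 24.13),
                    machine_idle = c(TRUE, TRUE, FALSE, FALSE))

intervals = data.frame(
  start        = head(ts_toy$timestamp, -1),
  end          = tail(ts_toy$timestamp, -1),
  duration     = diff(ts_toy$timestamp),
  machine_idle = head(ts_toy$machine_idle, -1)  # state during the interval
)
```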

merged_data_intervals = convert_ts_to_interval(merged_data_final)
##                        start                        end duration
## 1 2015-11-02 14:58:49.990541 2015-11-02 14:58:49.994391     0.00
## 2 2015-11-02 14:58:49.994391 2015-11-02 14:59:03.742392    13.75
## 3 2015-11-02 14:59:03.742392 2015-11-02 14:59:03.886408     0.14
## 4 2015-11-02 14:59:03.886408 2015-11-02 14:59:14.122334    10.24
## 5 2015-11-02 14:59:14.122334 2015-11-02 14:59:14.270387     0.15
## 6 2015-11-02 14:59:14.270387 2015-11-02 14:59:22.486440     8.22
##   machine_idle at_origin
## 1         TRUE     FALSE
## 2         TRUE      TRUE
## 3         TRUE      TRUE
## 4         TRUE      TRUE
## 5         TRUE      TRUE
## 6         TRUE      TRUE

Now we can aggregate across the different states to find the total amount of time in each state.

time_summary = merged_data_intervals %>% group_by(machine_idle, at_origin) %>% 
summarise(total_time = sum(duration, na.rm = T))

## Source: local data frame [4 x 3]
## Groups: machine_idle [?]
##   machine_idle at_origin total_time
##          (lgl)     (lgl)      (dbl)
## 1        FALSE     FALSE    2713.77
## 2        FALSE      TRUE      21.99
## 3         TRUE     FALSE    3339.20
## 4         TRUE      TRUE     576.95
total_time = sum(time_summary$total_time)
efficient_time = sum(time_summary$total_time[1])
inefficient_time = sum(time_summary$total_time[2:4])
interrupted_time = sum(time_summary$total_time[3:4])
time_at_origin = sum(time_summary$total_time[c(2,4)])
## [1] "Results"
## [1] "Total Time of Operation (including interruptions) = 6651.91s"
## [1] "Total Time without identified inefficiencies = 2713.77s"
## [1] "Total Time wasted due to interruptions = 3916.15s"
## [1] "Total Time wasted due to being at origin = 598.94s"

With this analysis we see that the machine was in a state of inefficiency for more than half the operating time.

The package also has a few convenience functions, not detailed here, that facilitate many common tasks required for this kind of analysis, along with a few helper scripts to help automate it. We intend to keep adding capabilities. Please raise any bugs/feature requests at -

Meetup on Predictive Analytics and the Industrial Internet of Manufacturing Things

William Sobel, Chief Strategy Officer at System Insights and Chief Architect/TSC Chair at the MTConnect Institute, will be giving a presentation titled “Predictive Analytics and the Industrial Internet of Manufacturing Things” on April 7th at The IoT Inc Business Meetup. Click here to RSVP. If you can't make it in person, you can watch the Meetup and participate live online here.


The Industrial Internet of Things has been hyped to take manufacturing into a new era; the German Industrie 4.0 initiative, NNMI in the US and China's 2025 goals are all aligned on the target of agile and smart manufacturing. Yet our current manufacturing systems have not changed much in the last 20 years, and we are still using paper and pencil in many of our processes. There are many advanced technologies we can bring to bear today to help us along that path, but we still need to build the foundations to enable these advancements. Manufacturing requires special consideration in an IIoT system: an approach that does not take the context of the manufacturing process into consideration will not be able to transform the data from equipment and sensors into actionable information. The solution is to build a standards-based interoperable platform that allows services to fuse semantic data from multiple sources, providing the foundation for accelerated innovation in smart manufacturing. Will Sobel will discuss how this is a model for the new products and services to come, and how it will enable outcome- and intent-based self-aware manufacturing systems.


MTConnect Workshop at the American Manufacturing Summit 2016

The American Manufacturing Summit 2016 was held on February 29th & March 1st, 2016 in Chicago.

William Sobel, Founder and Chief Strategy Officer of System Insights, presented at the Summit in Chicago with Moneer Helu from NIST, at the invitation of nMetrix, on the MTConnect standard and the foundations of the digital thread. The workshop was titled "The Connected Factory: How MTConnect Will Enable Next Generation Levels of Productivity and Accuracy in Manufacturing Software". He discussed how semantic standards and data from manufacturing equipment will enable new services that dramatically increase the productivity of manufacturing processes, and elaborated on the role of MTConnect in the connected factory. The talk illustrated how MTConnect in the connected factory will further improve on-time completions, factory productivity, production planning and costing accuracy. He also spoke about current projects being undertaken to ensure that next-generation productivity applications leverage best practices in the connected factory.

The American Manufacturing Summit is a leadership-focused meeting designed around improving plant floor operations and manufacturing strategy across the globe. The Summit serves as an annual platform to exchange ideas around the impact of market dynamics and new technologies for current and future manufacturing, operations and supply chain leaders. This year's Summit created an opportunity to examine key case studies around how workforce management, lean manufacturing, process improvement and automation are being rolled out in the world's best facilities. In-depth discussions helped attendees build their road maps for achieving innovation, maximizing manufacturing profitability, optimizing plant floor operations and establishing standardization across multiple manufacturing facilities.

Women Who Code: Chennai Network Launch Event

The Chennai Chapter of Women Who Code organized a Network Launch event on 13th February 2016 at ThoughtWorks Technologies (India) Pvt Ltd. Neha Kaura, our intern, organized the event, which saw an audience of around 70 participants, including students from colleges like St. Joseph's College of Engineering and RMK College of Engineering.

Women Who Code Chennai was launched earlier at SRM University. Through this event, the chapter wanted to reach women working in the IT sector of Chennai. As the chapter's first city-wide event, the agenda was to give an introduction to the latest technologies and practices in the IT sector. Developers gave talks on trends in Quality Assurance and Agile Development Methodology, and also proposed an innovative IoT project.

A panel comprising women from large corporate organizations and startups discussed their experiences in the IT sector. Nivetha from our team was also a member of the panel and described at length her journey of transitioning to software development from an electronics background. Nivetha said, “I gave an overview of the advantages of working in a startup, where one gets to wear different hats. Self-learning, ownership and being part of a lean team are some key takeaways. I also gave suggestions to the college students on building their careers by learning what interests them and not just following the curriculum.”

The chapter also plans to provide students and professionals with opportunities to learn the latest technologies being used in the field to hone their skills. Neha said, “The main motive of the organization is to connect those who are beginning their careers with more experienced professionals to develop a platform for mentoring. The event was quite successful in this aspect as many of the participants came forward to discuss what they wanted to learn and were eager to know details regarding the upcoming events."

To know more about upcoming events, sign up here:

Raftar Formula Racing at Formula Student India 2016

Raftar Formula Racing is the official Formula Student racing team of IIT Madras, representing the institute in Formula Student competitions. We are one of the sponsors of Team Raftar. The team showcased its best performance to date, finishing 3rd out of 60 teams. We are very proud of what Team Raftar has accomplished!

Raftar Formula Racing achieved an excellent result at FSI 2016 despite going through tough circumstances shortly before and during the competition. The team performed very competitively in the dynamic events, earning a large haul of points. The team placed 1st in Fuel Efficiency and 2nd in Endurance (a 22 km run), which it completed for the first time. The reliability of the car was commendable, with minimal issues during testing.

Raftar Formula Racing and its car RFR16 gained popularity over the last few months through numerous local and national news agencies and media networks. The car was also displayed at the CFI Open House, the annual exhibition of projects from the Centre for Innovation, IIT Madras, which drew a footfall of around 1000, predominantly students from various institutes, professors and alumni.

The team is already working on next year's radically new car, RFR17, for Formula Student India 2017. The team has learnt a lot from the Formula Student India experience and is motivated to further enhance the performance of the car.

"We are glad to be associated with you, and we thank you once again for the continued support from System Insights. We look forward to taking this project much further and exploring new frontiers of innovation with your support," says Mohit Patil, Team Leader for Raftar.