Recognized under Section 2(f) of the UGC Act, 1956

AI and ML Algorithms for a better future

At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which frames a choice between two undesirable outcomes. In the scenario, a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and there is no way to thread between them without a fatality. Who should the car hit?

Then the speaker said: Let’s take a step back. Is this the question we should even be asking?

That is when things clicked for Shivalik: develop students with the intelligence, sharp decision-making skills, and ethics to design ML and AI algorithms that wipe such dilemmas off the board of scenarios before new ones emerge. Technology should be life-giving rather than life-taking; after all, we build and develop technologies for our good and benefit, yet they can also cause the kind of harm the speaker's scenario describes.

Shivalik works to give its students an environment and exposure in which they can treat such problems as their own, solve them, and also look for other possible outcomes that may arise and produce an equally fatal scenario in a different form.

Then, instead of dwelling on the point of impact, the speaker pointed out that the self-driving car could have avoided choosing between two bad outcomes by making a decision earlier: on entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe. The dilemma might never have come up. That it did suggests something wrong or incomplete that needs correction: better algorithms with more detail, a richer set of commands for every situation that may arise, and regular updates to maintain and improve outcomes, so that emergencies can be controlled to a much greater extent.
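The speaker's "decide earlier" idea can be illustrated with a small sketch. This is not any production autonomy stack; the function names, thresholds, and braking parameters below are illustrative assumptions. The sketch caps the car's entry speed into a narrow passage so that it can always stop within the distance it can verify is clear, rather than facing a late no-win choice:

```python
import math

def safe_entry_speed(clear_distance_m: float,
                     deceleration_mps2: float = 4.0,
                     reaction_time_s: float = 0.5) -> float:
    """Highest speed (m/s) from which the car can still stop within
    clear_distance_m, given braking deceleration and reaction delay.

    Solves v * t + v**2 / (2 * a) = d for v (the positive root).
    """
    a = deceleration_mps2
    t = reaction_time_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * clear_distance_m)

def plan_speed(passage_width_m: float, clear_distance_m: float,
               cruise_speed_mps: float = 13.9,
               narrow_threshold_m: float = 3.0) -> float:
    """If the passage ahead is narrow, slow to a provably safe speed
    before entering; otherwise keep the normal cruise speed."""
    if passage_width_m < narrow_threshold_m:
        return min(cruise_speed_mps, safe_entry_speed(clear_distance_m))
    return cruise_speed_mps
```

For a 2.5 m alley with 10 m of verified clear road, the planner slows the car to roughly 7 m/s (about 25 km/h) instead of holding 50 km/h, which is exactly the upstream decision that makes the trolley-style choice unnecessary.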

Today's AI safety approaches often resemble the trolley problem, focusing on downstream regulation, such as liability, after someone is already left with no good choices. Recognizing this has made Shivalik a place whose atmosphere encourages students to design, innovate, and improve current technologies in an attempt to revolutionize the technological world as we know it. "Engineering systems are not divorced from the social systems on which they intervene" is the core belief of the institute, which is approved by the AICTE, recognized by the Uttarakhand Government, and affiliated to the Veer Madho Singh Bhandari Uttarakhand Technical University. Ignoring safety and emergency protocols while designing such independent AI systems can put millions of buyers' lives at risk. No system is entirely accident-proof, but the point is to improve the chances of averting such situations and to keep the risk in automated driving as low as possible.

For such technical enthusiasts, Shivalik offers special scholarship programs to incoming students with bright minds, world-changing ideas, and the conviction that they really can make a difference. Shivalik also helps students design an auditing procedure for any project they work on or apply for.

Students are taught to develop the habit of being curious about the things that pique their interest and then to pursue specialization courses in the field of their choice, with the institute providing the right platforms. Along the way they must work around tricky trade secrets to complete their specializations on time, after which they can apply to their dream companies or start their own. This, in turn, helps them learn the right legal way to handle being prevented from getting a close look at the very algorithm they are auditing, since such algorithms are legally protected, and to avoid legal disputes that could harm their project or their work.