All important things related to CodeVita
Hello friends!!!
In this post, I plan to tell you a little about the things we have been doing behind the scenes. Automatic code evaluation is a complex task, and making it scalable, reliable and performant is even more difficult. But we have been working diligently on exactly that, so that you can have a good experience participating in CodeVita - The TCS Coding Contest.
For the uninitiated, a coding platform usually comprises an engine which compiles the user-submitted code and runs a battery of test cases against it before it can figure out whether the submission is to be accepted or rejected. To isolate users from each other, the engine usually does some sand-boxing. All of these activities - compilation, execution, sand-boxing and so on - are resource intensive. For the 2013 season we had ~70K submissions in the first round, spread over a 24-hour window. That is roughly 4 submissions every 5 seconds or, rounded off, about 1 submission to compile and execute every second - a pretty stiff challenge. C / C++ take roughly 400 milliseconds to compile. Portable languages like Java and C# take about 800 milliseconds to compile. Perl, Python and Ruby are interpreted, so there is no separate compilation step. Compilation is done once for a given submission; if the submission has compilation errors, the process terminates there. If the submitted code compiles, the platform executes test cases against it. Test case execution time depends on the complexity of the test case, but for illustrative purposes it is fair to assume 300 - 500 milliseconds per test case. Different questions have different numbers of test cases.
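To make that flow concrete, here is a minimal sketch of the compile-once, then-run-test-cases loop described above. It is not the actual CodeVita engine - the gcc invocation, the verdict strings and the evaluate helper are illustrative assumptions - but it shows the shape of the work the engine does for every single submission.

```python
# A minimal sketch (not the real CodeVita engine) of evaluating one C submission.
import subprocess

def evaluate(source_path, test_cases, time_limit=0.5):
    # Compile once per submission; a compile error ends the evaluation right there.
    compile_step = subprocess.run(
        ["gcc", source_path, "-o", "solution"],
        capture_output=True, text=True
    )
    if compile_step.returncode != 0:
        return "Compile Error"

    # Run every test case against the compiled binary with a per-case timeout.
    for stdin_data, expected in test_cases:
        try:
            run_step = subprocess.run(
                ["./solution"], input=stdin_data,
                capture_output=True, text=True, timeout=time_limit
            )
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if run_step.stdout.strip() != expected.strip():
            return "Wrong Answer"
    return "Accepted"
```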
Another aspect is that the submitted code is written by the participant, so it is very likely that some submissions will run longer than expected. The longer a submission sits inside the sand-box, the harder it becomes for the engine to keep up with a rapid rate of submissions. A rough back-of-envelope calculation below shows why 1 submission / sec is a stiff target. Managing this complexity is the secret sauce behind the success of the platform.
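Here is that back-of-envelope calculation, using the figures mentioned above; the test-case count per question is an assumption, since it varies from question to question.

```python
# Figures from the post: ~70K submissions spread over a 24-hour window.
submissions = 70_000
window_seconds = 24 * 60 * 60
arrival_rate = submissions / window_seconds          # ~0.81 submissions/sec

# Rough per-submission cost. Compile time sits between ~400 ms (C/C++)
# and ~800 ms (Java/C#); each test case takes roughly 300-500 ms.
compile_ms = 600
per_test_ms = 400
test_cases = 10                                      # assumed; varies per question
service_ms = compile_ms + test_cases * per_test_ms   # ~4.6 seconds per submission

# Average number of sandboxes busy at any moment just to keep pace,
# ignoring slow or misbehaving submissions that hold the sandbox longer.
busy_sandboxes = arrival_rate * service_ms / 1000
print(f"{arrival_rate:.2f} submissions/sec, ~{busy_sandboxes:.1f} sandboxes busy on average")
```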
This is just the performance aspect. Another concern is the security of the application: the platform also has to guard itself against malicious code. Obviously, this post will not talk about how the platform safeguards itself against attacks and malicious code.
Last year, there were two non-critical functionalities in the web application, viz. Reports and Notifications. Reports used to tell teams where they stood vis-a-vis the average scores, in an abstract manner. Notifications used to be a Facebook-like popup with a message that some team had solved a problem you were currently working on. Both functionalities put additional load on the servers, hence they have been removed this time. This time the CodeVita webapp will be a simple application with only one critical functionality - File Upload for evaluations. All the submitted code, its evaluation status and its time of submission are always available for viewing through the My Submissions functionality.
Testing of the platform has been equally challenging. If you have been reading along, you know that the platform gives out, primarily, 7 kinds of messages, and this holds across all 7 supported languages. Hence the test matrix consisted of 7x7, i.e. 49, combinations thrown at the system in random order. The platform's ability to evaluate them correctly, and the amount of time it takes to do so, are treated as important test metrics.
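For illustration, a tiny sketch of enumerating that 7x7 matrix is given below. The verdict names in it are assumptions - the post only says there are seven kinds of messages, it does not list them - and the real test harness would submit canned code crafted to trigger each combination.

```python
# Sketch of the 7x7 test matrix: 7 supported languages crossed with 7 verdicts.
import itertools
import random

languages = ["C", "C++", "Java", "C#", "Perl", "Python", "Ruby"]
# Assumed verdict names for illustration only.
verdicts = ["Accepted", "Wrong Answer", "Compile Error", "Runtime Error",
            "Time Limit Exceeded", "Memory Limit Exceeded", "Presentation Error"]

matrix = list(itertools.product(languages, verdicts))   # 49 combinations
random.shuffle(matrix)                                   # thrown at the system randomly

for language, expected_verdict in matrix:
    # Here the harness would submit a canned program in `language` crafted to
    # produce `expected_verdict`, then record the engine's actual verdict and
    # its turnaround time as test metrics.
    pass
```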
Plenty of thinking and iterative development has gone into building the platform. So if you believe that your solution is correct but the system does not accept it, think carefully again: it is far more likely that the fault lies in your submission than at the server end. If a test case is mis-configured on the server side due to human error, it is possible that your code is correct and the platform is evaluating it incorrectly; in that case, nobody will get their solution into the "Accepted" state. We will do our best not to make such silly mistakes and to ensure that such a situation never arises. Should it ever happen, we have a mechanism to correct the faulty test case and redo the evaluations after the contest. If your code is correct, re-evaluation will detect it and grant you the score you are entitled to, so we will try to neutralize any disadvantage that came your way.
I hope this has given you a feel for how the platform is engineered, and an idea of the system you will be dealing with in the contest.
Note:- Questions / comments most welcome.
!!! Cheers and Happy Learning !!!
!!! Thank you !!!