GitHub's AI-based Copilot writes insecure code in around 40 percent of cases

The programming assistant's code is buggy at best and open to attack at worst.


Researchers at New York University's Tandon School of Engineering have tested GitHub Copilot, the artificial-intelligence programming assistant, and found that the code it generates is at best buggy and at worst vulnerable to attack around 40 percent of the time.


For their paper, an empirical cybersecurity evaluation of GitHub Copilot's code contributions, the researchers created 89 code-development scenarios for Copilot, which produced 1,692 programs. Approximately 40 percent of these solutions contained bugs or design flaws that an attacker could exploit.


Copilot is available in private beta as an extension for Microsoft Visual Studio Code. The system lets developers describe a desired feature in a comment, and it generates code matching that description. Copilot can also predict what the developer will write next from variable and function names and other cues.
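The following minimal sketch illustrates that comment-driven workflow: a prompt-style comment followed by the kind of completion such an assistant might plausibly produce. The reverse_in_place function, its body, and the sample input are illustrative assumptions, not actual Copilot output.

    #include <stdio.h>
    #include <string.h>

    /* Prompt-style comment a developer might write for the assistant:
     * "reverse a string in place and return it" */
    char *reverse_in_place(char *s)
    {
        size_t len = strlen(s);
        for (size_t i = 0; i < len / 2; i++) {
            char tmp = s[i];
            s[i] = s[len - 1 - i];
            s[len - 1 - i] = tmp;
        }
        return s;
    }

    int main(void)
    {
        char buf[] = "copilot";
        printf("%s\n", reverse_in_place(buf));  /* prints "tolipoc" */
        return 0;
    }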


The researchers examined Copilot's output along three distinct axes: how often the generated code exhibits weaknesses from the list of the 25 most common software weaknesses (the CWE Top 25); how likely different prompts are to produce SQL injection vulnerabilities; and how Copilot handles code suggestions for less popular languages, such as the hardware description language Verilog.
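As a hedged illustration of the SQL injection class mentioned above, the sketch below contrasts a query built by string concatenation with one that uses a bound parameter, written against the SQLite C API. The function names, the users table, and the name column are assumptions made for the example and do not come from the study.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Vulnerable pattern: user input is pasted into the SQL text, so input
     * such as  x' OR '1'='1  changes the meaning of the query. */
    int lookup_unsafe(sqlite3 *db, const char *username)
    {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT id FROM users WHERE name = '%s';", username);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);  /* injectable */
    }

    /* Safer pattern: a prepared statement with a bound parameter, so the
     * input is treated as data rather than as SQL syntax. */
    int lookup_safe(sqlite3 *db, const char *username)
    {
        sqlite3_stmt *stmt = NULL;
        int rc = sqlite3_prepare_v2(db,
            "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            ;  /* consume result rows */
        return sqlite3_finalize(stmt);
    }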


According to the researchers, in several cases Copilot produced code that used pointers returned by malloc without checking them for NULL, code with credentials embedded directly in it, code that passed untrusted user input straight to the command line, and code that displayed more than the last four digits of a US Social Security number.
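As a minimal sketch of the first of those patterns, assuming nothing about the actual generated programs, the following hand-written C fragment contrasts an unchecked malloc with a checked one; the function names and the string-copying task are illustrative only.

    #include <stdlib.h>
    #include <string.h>

    /* Risky pattern: the pointer returned by malloc is used without a NULL
     * check, so an allocation failure leads to a null-pointer dereference. */
    char *copy_name_unchecked(const char *name)
    {
        char *buf = malloc(strlen(name) + 1);
        strcpy(buf, name);              /* undefined behaviour if buf is NULL */
        return buf;
    }

    /* Defensive version: check the allocation before using the pointer. */
    char *copy_name_checked(const char *name)
    {
        char *buf = malloc(strlen(name) + 1);
        if (buf == NULL)
            return NULL;                /* let the caller handle the failure */
        strcpy(buf, name);
        return buf;
    }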

