- GitHub wants to update its policy on dealing with potentially malicious code, and the community isn’t happy about it.
- Many find that it’s practically impossible to police uploads without hindering offensive security research.
- The code hosting platform recently had to deal with a particularly risky PoC, which is why it is now taking broader action.
GitHub believes the time has come to introduce a stricter policy for dealing with malware on the platform, and it is calling on the community to share its thoughts. Mere hours after the announcement went out, numerous security researchers expressed their concerns: added restrictions could cost them valuable resources such as PoCs that help plug holes, IoCs that help protect systems, offensive security tools that aid in the discovery of flaws, exploit iterations, and all the other “goodies” that make their work possible.
GitHub has grappled with similar problems and reservations before, but the straw that broke the camel’s back appears to have been the upload of a working proof-of-concept exploit for the Microsoft Exchange flaws, long before most vulnerable systems had been patched.
Because this was an exploit against Microsoft products, and GitHub is a Microsoft-owned platform, removing that particular PoC didn’t sit well with the community, even though many considered its publication unethical in the first place. GitHub therefore concluded that it needs a standard system for evaluating all uploads, rather than having to justify, case by case, what is acceptable under the particular circumstances of each incident.
In the announcement, GitHub states that fostering security research remains high on its list of priorities, but it needs to find a way to keep the platform safe while respecting the specific needs of security professionals. As numerous researchers pointed out in the accompanying discussion thread, though, this will be very complicated to achieve. Frankly, most of them would prefer that GitHub keep things largely unregulated, preserving the freedom to upload and share potentially harmful code, even if this admittedly carries some risk of abuse.
What emerges from the discussion is that GitHub’s team sincerely wants to hear the community’s feedback, but whether anyone’s opinion will ultimately influence the final decision remains to be seen. Already, many in the security research field are disappointed by this development, and quite a few posts are discussing viable alternatives.
So the cogs are now turning, and we will soon know how deep and wide the impact of GitHub’s new policies will be. If they prove too disruptive for security research, the affected community will simply move elsewhere, perhaps even setting up something new. Security research certainly isn’t going to stop, but a ground-shifting move from GitHub could cause serious disruption.