Why Use TeamMentor

Recently a question came up about the benefits of TeamMentor. Specifically, what is the typical scenario for people using TeamMentor?

The idea is that people might know about security controls but not how to implement them; they go to TM and find out how to implement the controls. For example, a company finds out they have a bunch of SQLi and XSS in their web sites, but they don’t know which controls actually prevent those vulns. So they do what happens very often: they add some specific filters to that specific vulnerable piece of code and don’t change the architecture at all. Overall, their security posture doesn’t really improve and the developers don’t learn from their mistakes. The same types of vulnerabilities continue to haunt them. Enter TM.

Someone finds out they have XSS. They go to TM and quickly find XSS in the views in the OWASP folder, in the CWE library, and now in the Top Vulns library as well. There, they can read about industry-standard ways to handle XSS. The number of articles per subject is around a dozen, and they’re pretty simple articles. In about an hour, they have the information to handle XSS vulns. TeamMentor can help fix discovered vulnerabilities.

But what if someone doesn’t want to have XSS vulns in the first place? Then they can read about security controls for it and implement the ones that are relevant to their application(s). The result is that the number of vulnerabilities is reduced overall and the application is hardened against exploitation. As an added bonus, guidance for standards compliance is included. The standards compliance part is still a work in progress, but most standards are based on the same principles, and these are the principles described in TM. The language is technical but simple and is chosen to bridge the gap between developers and their employers/clients/managers. The libraries include the OWASP Top 10 and CWE Top 25 vulnerability indexes, so even if someone doesn’t know what kinds of vulnerabilities are being exploited out there, they can still choose a logical set of controls for their application.

The bottom line is that a short session with TeamMentor can help prevent expensive and dangerous vulnerabilities before they happen.

Brutus Password Cracker

We reference Brutus in some of our articles, but the original site is down, so I mirrored it.

Yes, Brutus is really old. No, it’s not the best password cracker. It’s used as a classic example of a password guessing attack and it works fine for that.

The file was downloaded from http://www.darknet.org.uk/2006/09/brutus-password-cracker-download-brutus-aet2zip-aet2/ and scanned using https://www.virustotal.com/en/file/49a3e574080a63b1a24980b3a775a82b5a9f7c269318662f5bbebcf21f8cefe4/analysis/. I mirrored it in case darknet.org.uk also goes down. Yes, the file is detected by anti-viruses as a “hacking tool”. Don’t use it on production machines or anything sensitive; use it at your own risk. It’s just an example of a password guessing attack tool.

The download URL is http://sergelab.net/tools/brutus.zip. The password is darknet123.

HTML5 Attack Vectors

http://html5sec.org

The HTML5Sec site is a nice collection of JavaScript snippets that abuse various HTML5 features. There doesn’t appear to be clear guidance on preventing this abuse available there, nor do these snippets exploit any specific vulnerabilities, so the practical application of this information is limited. The sheer amount of information and the appealing way it is presented are impressive, however.

What are Application Security Best Practices?

Application security best practices are techniques that effectively increase information assurance by removing root causes of vulnerabilities or adding defense in depth. Vulnerabilities are often caused by programming errors; detecting and fixing these errors removes such vulnerabilities. Fewer vulnerabilities mean better information assurance (information assurance is a technically more accurate term for what people might call cybersecurity). In addition to removing root causes of vulnerabilities, information assurance can be increased by adding defense in depth. Defense in depth means adding layers of defenses so that if an attacker breaks into the system, he only gets limited access and therefore can do less damage.

One example of a best practice is separating data and queries when executing SQL queries. A common programming mistake is to concatenate data into SQL queries and then execute them – this programming mistake is the cause of ALL SQL injection vulnerabilities (besides maybe some kind of hypothetical academic situation). SQL injection happens when the attacker is able to add malicious logic to the data and this logic is executed by the database driver as a part of the query. Because the data and the query are concatenated and passed as a single item, the database driver cannot tell the difference between the legitimate query and the attacker’s injected SQL. The database then executes the malicious SQL, typically giving the attacker unauthorized access to the database. The correct approach is to separate the data and the query and to pass them separately to the database driver. In that situation, the database knows to treat the legitimate SQL and any possible data differently; it doesn’t matter then if the attacker tries to put SQL in the data, because the database knows that it’s just data and not actually SQL. Therefore, separating data and SQL queries is an effective technique for preventing SQL injection vulnerabilities. Removing SQL injection vulnerabilities is a great advantage and there are no disadvantages that compromise the other aspects of information assurance, so this technique qualifies as a best practice. Applying this technique to new and existing code is usually very simple, which is an added bonus. This technique definitely exists and it is definitely effective, which means that at least one “best practice” exists.
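
As a minimal sketch of the difference (using Python and its built-in sqlite3 driver; any driver with a parameterized API works the same way; the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled data

# Vulnerable: data is concatenated into the query, so the driver
# cannot tell the attacker's SQL apart from the legitimate query.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # returns the admin row -- injection succeeded

# Safe: the query and the data are passed separately; the driver
# treats user_input strictly as data, never as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # empty -- no user is literally named "' OR '1'='1"
```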

Application security best practices work reliably regardless of the organization using them. Separating queries and data works with any SQL database driver that provides APIs for it, which includes most (but not all) SQL database drivers. It doesn’t matter whether the application is deployed by a global financial institution on its production servers or by a college student on his home page; separating data and code works just as effectively in either scenario. It is of course important to determine whether the application uses SQL at all and whether the database driver allows separating queries from data. These distinctions are trivial, especially for the team that develops the application.

Most vulnerabilities that are exploited by actual threat actors can be prevented by applying well-studied best practices. Only a subset of all possible vulnerability types is commonly exploited in the wild. The OWASP Top 10 project enumerates some commonly exploited vulnerability types and the corresponding best practices. Here are some of these vulnerability types and best practices at a glance:

A1 – Injection

  1. Use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface (this is the SQLi example mentioned above).
  2. If a parameterized API is not available, carefully escape special characters.
  3. Use positive or “white list” input validation (see the sketch after this list).
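
To illustrate point 3, here is a minimal sketch in Python; the pattern and the function name are illustrative, not from any standard. Whitelist validation accepts only the expected format and rejects everything else:

```python
import re

# Whitelist ("positive") validation: accept only the expected format.
# The pattern here is illustrative -- usernames of 3 to 20 word
# characters; define it to match your actual data.
USERNAME_RE = re.compile(r"^\w{3,20}$")

def validate_username(value: str) -> str:
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

print(validate_username("alice"))     # passes
try:
    validate_username("' OR '1'='1")  # injection payload is rejected
except ValueError:
    print("rejected")
```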

A2 – Broken Authentication and Session Management

  1. Use a single set of strong authentication and session management controls (see the sketch after this list).
  2. Make strong efforts to avoid XSS flaws.
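
As a minimal sketch of one such control, using only the Python standard library (the cookie name and attribute choices are illustrative): the session identifier should be unpredictable, and the cookie carrying it should be protected from script access and plain-HTTP transport.

```python
import secrets
from http.cookies import SimpleCookie

# Generate an unpredictable session identifier (128 bits of entropy).
session_id = secrets.token_urlsafe(16)

cookie = SimpleCookie()
cookie["session"] = session_id
cookie["session"]["httponly"] = True   # not readable by JavaScript (limits XSS impact)
cookie["session"]["secure"] = True     # only sent over HTTPS
cookie["session"]["samesite"] = "Lax"  # not sent on cross-site requests (limits CSRF)

# The resulting Set-Cookie header to send with the response:
print(cookie.output())
```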

A3 – Cross-Site Scripting (XSS)

  1. Properly escape all untrusted data based on the HTML context (see the sketch after this list).
  2. Use positive or “white list” input validation.
  3. For rich content, consider auto-sanitization libraries.
  4. Consider Content Security Policy (CSP) to defend against XSS across your entire site.
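
A minimal sketch of points 1 and 4 in Python (the CSP policy shown is illustrative, not a universal recommendation): escaping has to match the context the data lands in, and a CSP header restricts where scripts may load from.

```python
import html
import urllib.parse

untrusted = '<script>alert(1)</script>'

# Escaping for an HTML element or attribute context.
safe_html = html.escape(untrusted, quote=True)
print(f"<p>{safe_html}</p>")  # &lt;script&gt;... renders as text, not code

# Escaping for a URL query-string context uses different rules.
safe_url = urllib.parse.quote(untrusted, safe="")
print(f"/search?q={safe_url}")

# A Content Security Policy header (illustrative policy): scripts may
# only load from this site's own origin, and inline scripts are blocked.
csp_header = ("Content-Security-Policy", "default-src 'self'; script-src 'self'")
print(": ".join(csp_header))
```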

Mapping best practices and documenting them is one of the core efforts of TeamMentor.

NSA Meta-data Collection

The difference between collecting meta-data about communications and monitoring the contents of the communications is that meta-data has more to do with mapping relationships between people than with watching what they are saying. The documents leaked by Snowden and his interviews suggest that the NSA is collecting communication meta-data from a wide array of sources. Looking on Twitter, it seems that many people interpret this to mean that their personal communications have been compromised. That is correct, but the meta-data about relationships is often more valuable than the communications themselves, and this detail often appears to be overlooked.

Obama Administration White Paper on NSA Bulk Collection of Telephony Metadata

“Under the telephony metadata collection program, telecommunications service providers, as required by court orders issued by the FISC, produce to the Government certain information about telephone calls, principally those made within the United States and between the United States and foreign countries. This information is limited to telephony metadata, which includes information about what telephone numbers were used to make and receive the calls, when the calls took place, and how long the calls lasted. Importantly, this information does not include any information about the content of those calls—the Government cannot, through this program, listen to or record any telephone conversations.

This telephony metadata is important to the Government because, by analyzing it, the Government can determine whether known or suspected terrorist operatives have been in contact with other persons who may be engaged in terrorist activities, including persons and activities within the United States. The program is carefully limited to this purpose: it is not lawful for anyone to query the bulk telephony metadata for any purpose other than counterterrorism, and Court-imposed rules strictly limit all such queries. The program includes internal oversight mechanisms to prevent misuse, as well as external reporting requirements to the FISC and Congress.”

The reason that relationship meta-data might be more valuable than the communication contents is that meta-data is fact, but communication content is frequently unreliable. If a person contacts another person repeatedly, this almost certainly means that there is a relationship between them. It is pretty simple to filter out “wrong number” types of contacts by using simple frequency analysis. Frequent communications between two individuals likely mean that there is a strong relationship between them. Infrequent communications mean that the relationship between them is either weak or non-existent. This type of information is usually factual – most people are unlikely to counterfeit this type of information efficiently, though this meta-data mining system is not completely fool-proof. By comparison, the contents of the messages many people send to each other are complete rubbish – look at http://textsfromlastnight.com/ for some examples. Collecting meta-data can be easily automated, but reading messages almost certainly requires human analysts, and human analysts are typically a lot more expensive than automated meta-data processing. Meta-data is more accurate and less expensive to process than the communications themselves, and both of these factors frequently make meta-data more valuable.
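
A toy sketch of that frequency analysis in Python (all numbers and records here are made up for illustration): count the contacts per pair of parties and drop the pairs below a threshold.

```python
from collections import Counter

# Hypothetical call records: (caller, callee) pairs.
calls = [
    ("555-0001", "555-0002"), ("555-0001", "555-0002"),
    ("555-0001", "555-0002"), ("555-0001", "555-0003"),
]

# Count contacts per unordered pair of parties.
pair_counts = Counter(frozenset(pair) for pair in calls)

# Pairs contacted repeatedly likely indicate a real relationship;
# one-off contacts ("wrong numbers") fall below the threshold.
THRESHOLD = 2
relationships = {pair: n for pair, n in pair_counts.items() if n >= THRESHOLD}
print(relationships)
```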

Collection of meta-data is different from inspecting the communications themselves. Meta-data is often more valuable than the communications.

Developing TeamMentor Content

Over the past couple of months we have developed a thorough guide for developing TeamMentor content. An earlier post on this blog roughly describes a similar process for writing technical content, but only as a short article. The thorough guide has been developed to help new SMEs write TeamMentor content and is available to our customers.

My observations of the effectiveness of the TM4TM writing guide:
+ At least some people seem not to actually follow instructions. The people that don’t follow the instructions are often also the worst performing in terms of both article quantity and article quality.
+ Reading the guide does not seem to substantially increase the number of articles a given SME produces per week.
+ The guide saves a lot of time for the team as a whole, because SMEs produce similar-looking articles, which saves a lot of editing and formatting work. Uniform formatting is one of the requirements for developing a technical content library; a guide and a standard for writing uniform articles helps with the overall process.

TeamMentor Vulnerability Remediation

TeamMentor now supports an effective vulnerability remediation workflow. The TeamMentor 3.3 release adds support for integration with automated vulnerability scanners. Vulnerability scanner integration means that vulnerability scan reports include links to guidance that describes how to fix the discovered vulnerabilities. Scanners often come with limited instructions on how to actually fix the vulnerabilities that they find, and this is the limitation that the latest TeamMentor release helps to overcome. The benefits of having clear and accurate instructions for fixing vulnerabilities are:

+ Vulnerabilities get fixed faster – developers don’t have to look for instructions about what to do.
+ Vulnerabilities get fixed more accurately – developers don’t have to choose between conflicting advice on the Internet.

First TM Collaborative Experience

I’ve used TeamMentor to mentor a team for the first time tonight. The results are some observations and some issues filed in GitHub. This process should help improve TeamMentor usability.

The Mentoring:

Non-technical people get hands-on training in basic infosec skills and then take notes on their experience. The idea is to then convert these notes into TM articles. To support the activity, some bare-bones TM articles (stubs) are created in the beginning with links to tools, tutorials, etc. As people use the tools, they take notes. The notes then go back into TM. This process should be similar to what would take place within an organization as it raises its infosec awareness level.

Tonight’s lesson was about removing Windows malware using Process Hacker and Autoruns. It’s a useful basic skill for people that use Windows and should be effective for detecting and eliminating >90% of malware.

Observations:

+ TeamMentor can certainly get the job of sharing InfoSec/AppSec information done.
+ The UI works okay for content collaboration, but there are a lot of minor annoyances.
+ TeamMentor can be used to manage system configuration standards. Managing system configuration standards is a common requirement, and a web portal for managing them may have value.

Writing Articles

To write a set of TM articles:

1. Define requirements. Figure out what you’re writing about and what exactly you want to write. Get a vision of what you would like your final product to be, in terms of quality, quantity, depth and presentation. Be optimistic, but realistic. By being organized, you can probably increase your performance, but it will still be proportional to your current level of performance. The important thing is to have a vision of your end product in the state that you want to present it to your customer or audience. This is the vision that you will be turning into reality.

2. Visualize the content. Once you have a mental vision of your final product, you can start visualizing it with technology. There are many names for essentially the same information visualization techniques: mind mapping, concept mapping, infographics, etc. Create a visual representation of the structure of your final product. In this case, the product is a set of articles. In graph theory terms, this can be represented as a graph where each node/vertex is an article. Once you can make a chart/diagram/infographic of your future articles, you are pretty much done with this step. The following is for “advanced users”: each article is also a graph, and each node in that graph is a point that is going to be covered by the article. Identifying the subjects covered by each article at this conceptual stage tends to save time later.
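
As an illustrative sketch (the titles are made up), the plan can literally be kept as a graph structure in code, with articles as nodes and planned cross-references as edges:

```python
# Articles as nodes; planned cross-references as directed edges.
content_plan = {
    "What is XSS?": ["Escaping Output", "Input Validation"],
    "Escaping Output": ["What is XSS?"],
    "Input Validation": ["What is XSS?"],
}

for article, related in content_plan.items():
    print(f"{article} -> links to: {', '.join(related)}")
```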

3. Make article stubs. Now make blank article stubs as files for all your planned articles. This helps you see exactly how much work you have and what needs doing. Having multiple articles in production at the same time lets you switch between them: when you are feeling burnt out on one subject, you switch to another, and then another, and by the time you get back to the first article, you don’t feel burnt out anymore. This way you can keep going pretty much continuously while you have the energy.

4. Split article stubs into bundles. In most real-world workflows there are deadlines, so it helps to have deliverables on hand to keep releasing. To accomplish this, article stubs can be organized into bundles that can be released at approximately the same time. Splitting articles into groups also helps you stay organized throughout the writing process. If you have a small set of articles, say under 40, you can probably make do without this step. A TeamMentor library usually has at least 150 articles in it, so splitting them into groups is pretty much essential to staying organized.

5. Fill article stubs with outlines. Go through the article stubs in the group that you are working on and get an idea of how hard each article is to write. One of the most time-consuming parts is doing research. Another time-consuming part is adding “exhibits”, like code examples or pictures. What I call an “exhibit” here is any media or information product that you put in your article to help clarify your point. Producing the “exhibit” is often more work than writing the article itself. Even just photoshopping an image can take longer than writing an article, and that’s pretty much the quickest way to have something original besides plain text. So, while you’re going through your group of article stubs, you get an idea of how much work each article is going to be. At the same time, you can put notes about what you want to cover inside each article stub and organize them sequentially to produce a structure for the article.

6. Write the articles. Ultimately, the articles have to be written. It’s recommended to start with the easiest/quickest articles, so that you have something to deliver as soon as possible. While you’re working on the easy ones, you can think about what you’re going to write for the hard ones. With the outline structures in your stubs from step 5, you can start writing an article from any place within it. Say you have an article with an introduction, three body paragraphs, and a conclusion. If you have an outline, you can write the second body paragraph, then the introduction, then switch to another article. When you return to this article, you can write the first body paragraph and the conclusion, then switch to another article again. When you get back to it once more, you write the last body paragraph and the article is done.

7. Update the content chart/diagram. As you’re writing your articles, you might find out things that you didn’t know before you started. As a result of this new information, you might want to update your plan slightly, and the final product will be a little different from what you originally visualized as a chart. Once you have most of your content done, go back to your chart and update it so that it reflects reality. The chart is going to be useful for other things now, so it helps if it reflects the actual content.

8. Cross-reference the articles using the chart/diagram. One thing you can use the chart for is to group related articles together and link them to each other. This way the reader can explore the related articles after finding one of the set. This is one of the most common scenarios enabled by having an accurate chart of your content. You can also plan a delivery schedule, share the chart with a customer to describe what content is there, share the chart with your team to show the current state of your content and where it might need improvement, etc. When the time comes to update the content, you can use the chart to plan out your updates.

9. Review and edit. Apply formatting, style the text, run the spell-checker, and then have an editor review the articles.

10. Publish. Since we are talking about TeamMentor here, this means importing the articles into TM. In the context of blogs, it means pressing the publish button, assigning categories and tags, and doing SEO.