With software becoming ever more present in our private and professional lives, it is probably not surprising that software applications, with all their weaknesses, have also become increasingly interesting to hackers. There exist many different ways to hack applications and crack accounts, e.g. eavesdropping on or sniffing network traffic, social engineering and phishing, brute-force attacks, or using built-in backdoors, to name just a few.

Various efforts have been undertaken to reduce the risk of users falling prey to such attacks. However, many of those efforts focus on changing user behaviour and fixing the (visible or hidden) interfaces that a specific technology offers. Put another way, security risks in software applications are often dealt with retrospectively, by fixing a breached entrance point. – But what if we started looking for weak spots at a much earlier stage: the respective engineers and their tools? Does the choice of tools, for example the kind of programming language used, influence the security of a certain technology? Most certainly yes.

Available programming languages have been designed with different goals and come with different strengths and weaknesses, but they haven't always been developed with security in mind

C has been designed to favor performance, affordability, and ease of implementation over safety. The C family of languages has inherited those tradeoffs. Those were deliberate choices and they still make a lot of sense in many circumstances today. Our OSs run fast thanks to them.

WWDC2017, Session 407 – Understanding Undefined Behavior

– with SPARK (an Ada-based language used in safety-critical / high-security domains) seemingly representing a rare counter-example.

Current programming languages also give their respective users different degrees of freedom in how to use them.

The C programming language is terrible. […] I mean 'terrible' in the 'awe-inspiring dread' sense more than the 'bad' sense. C has become a monster. It gives its users far too much artillery with which to shoot their feet off. Copious experience has taught us all, the hard way, that it is very difficult, verging on 'basically impossible,' to write extensive amounts of C code that is not riddled with security holes.

Evans 2017, TechCrunch

The problem here seems to be that using a language like C requires the user to know a lot about it – about undocumented assumptions and about possible side effects – and to always keep them in mind. Many of our current development languages do not help much in the way of enforcing good security practices or avoiding known security issues.

So why do developers keep making the same mistakes? Some errors are caused by legacy code, others by programmers’ carelessness or lack of awareness about security concerns. However, the root problem is that while security vulnerabilities, such as buffer overflows, are well understood, the techniques for avoiding them are not codified into the development process. Even conscientious programmers can overlook security issues, especially those that rely on undocumented assumptions about procedures and data types.

Evans and Larochelle 2002, IEEE SOFTWARE, p.42f

While it is impossible, and also undesirable, to codify all knowledge a programmer has to have about a language into the language itself, do we maybe need safer programming languages that protect their users from making choices that could have unintended consequences and lead to insecure products? The designers of the Swift language seemed to think so.

Now, safety is a design choice in SWF [Swift]. […] While you can write code fine-tuned for performance in SWF, this language makes different tradeoffs and was designed to be much safer by default.

WWDC2017, Session 407 – Understanding Undefined Behavior

How is Swift safer by design? Let me give you three examples: overflows, switches and optionals.


Overflows

Using overflows for security exploits is a very popular technique among hackers. A buffer overflow happens when a programme tries to put more data into a buffer in memory than the buffer was originally created to hold. Writing past the end of the allocated buffer can overwrite adjacent data, which can in turn lead to crashes or corrupt data, or – when writing into areas that contain executable code – to the execution of malicious code. The Heartbleed bug comes to mind (even though it technically isn't a 'classic' buffer overflow exploit, where data in memory is overwritten with the intent to execute it; it is rather about reading more data from memory than originally intended); more examples can be found, for example, in vuldb.com or the Common Vulnerabilities and Exposures (CVE) database.

By default, Swift is designed to be memory safe. Swift can bridge Objective-C and many C types, though, and provides the possibility to call C and Objective-C functions. Since the C language is especially prone to overflows, applications written in Swift can potentially suffer from the same vulnerabilities.

Buffer overflows, both on the stack and on the heap, are a major source of security vulnerabilities in C, Objective-C, and C++ code. [...] Because many programs link to C libraries, vulnerabilities in standard libraries can cause vulnerabilities even in programs written in 'safe' languages.

Apple Secure Coding Guide

Swift works hard to make interaction with C pointers convenient, because of their pervasiveness within Cocoa, while providing some level of safety. However, interaction with C pointers is inherently unsafe compared to your other Swift code, so care must be taken.

Apple Swift Blog, Interacting with C Pointers
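A minimal sketch of what 'inherently unsafe' means in practice, using Swift's unsafe buffer pointer API (the values are illustrative, not from the original text):

```swift
// 'Pure' Swift collections bounds-check every access; the unsafe
// pointer APIs used for C interop deliberately opt out of that.
var values: [Int8] = [1, 2, 3]

values.withUnsafeMutableBufferPointer { buffer in
    buffer[0] = 42        // writes through a raw pointer, like C would
    // buffer[10] = 42    // out of bounds: undefined behaviour, as in C
}

print(values)  // [42, 2, 3]
```

The compiler cannot protect you inside such a closure; the safety of every access is the programmer's responsibility again, exactly as in C.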

When using 'pure' Swift, over- and underflows cause a runtime trap. This means that your application will crash rather than continue running in an unexpected state.

Swift is designed to prioritise safety and minimise these errors. Swift’s default behavior of trapping on overflow calculations may come as a surprise to you if you have programmed in another language. [...] The philosophy of the Swift language is that it is better to trap (even though this may result in a program crashing) than potentially have a security hole.

Swift Programming: The Big Nerd Ranch Guide, p.28

This is an example of an integer overflow in Swift:
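A minimal sketch, assuming a starting value of 120 for y so that the true sum would be 130, as described below:

```swift
let y: Int8 = 120

// y + 10 would be 130, which exceeds Int8.max (127).
// Instead of silently wrapping around, Swift traps at runtime:
// let z = y + 10   // crash: arithmetic overflow

// The failable 'exactly' initialiser shows why: 130 has no Int8 representation.
print(Int8(exactly: 130) == nil)  // true
```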

As y is an Int8, the literal 10 is also inferred to be an Int8; but since an Int8 can only hold values from -128 to 127, the expected result (130) does not fit, which crashes the programme. Many languages other than Swift would not trap here; they would instead wrap around, so that the (unexpected) result of the above calculation would have been -126.

A malicious user could exploit code with wrap-around behaviour by, for example, entering a negative number where a positive one is required. If the application expects an unsigned number but gets a negative one, that number might be interpreted as a very large number (because of the aforementioned over-/underflow wrapping behaviour). Allocating a buffer for the unexpectedly large number could then potentially lead to an overflow.
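To see how a negative value turns into a very large unsigned one, consider this small sketch (the variable names are illustrative). In C this reinterpretation happens silently during conversion; Swift makes it explicit via the bitPattern initialiser:

```swift
let requested: Int8 = -1                       // e.g. an attacker-supplied "size"
let asUnsigned = UInt8(bitPattern: requested)  // same bits, reinterpreted as unsigned
print(asUnsigned)  // 255
```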

Wrap-around behaviour can also be achieved in Swift by using the overflow operators; these operators have to be applied intentionally, though, and aren't the default.
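A minimal sketch, reusing an Int8 near its maximum:

```swift
let y: Int8 = 120
let wrapped = y &+ 10  // &+ wraps instead of trapping: 130 - 256 = -126
print(wrapped)  // -126
```

The ampersand prefix (&+, &-, &*) is a visible marker in the source code that wrapping is intended, so it cannot happen by accident.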


Switches

Switch-case statements are one possible tool (among others) for controlling code execution by structuring it. Programmers can specify different blocks of code that will be executed under certain conditions: whenever a value matches that of a case, the block of code in that case statement is executed. In languages like C, falling through from one case to the next is the default, even though a break is most often the desired option. Forgetting to put a break at the end of a case is therefore another common source of security issues.

While not a 'real' switch-case bug, the GoTo-Fail incident also touches on the overall concept of flow control, so I want to mention it here. Much of the blame in that incident can be attributed to misleading indentation, and probably also to missing braces, in the if-statements. Let me show you a scary but entirely possible example of a switch-case in Objective-C:

switch (foo) {
    case 0:
    case 1:
        NSLog(@"more stuff");
    case 2: {
        NSLog(@"different stuff");
    case 3: {
        NSLog(@"other stuff");
    } break; {
    case 4:
        NSLog(@"stuffy stuff");
    } break;
    case 6: NSLog(@"stuff stuff");
    case 8: NSLog(@"and stuff"); break;
    }
}

If-statements, switches, and other control flow tools should be as clear as possible, so that the flow of code can be immediately discerned. They should also be easy to implement correctly, i.e. it should be really hard to forget a bracket or a break in the correct place. Swift does that with switch-case statements: you don't have to explicitly put a break at the end of a case anymore, because Swift made breaking the default.

switch statusCode {
    case 400:
        errorString = "Text for error 400"
    case 401:
        errorString = "Text for error 401"
    case 403:
        errorString = "Text for error 403"
    case 404:
        errorString = "Text for error 404"
    default:
        errorString = "Default error text"
}

[Languages like C or Objective-C] require a break control transfer statement at the end of the case’s code to break out of the switch. Swift’s switch works in the opposite manner. If you match on a case, then the case executes its code and the switch stops running.

Swift Programming: The Big Nerd Ranch Guide, p.38

Fallthroughs are still possible in Swift, but they have to be explicitly added, which makes it less likely that they happen by accident.

switch statusCode {
    case 400, 401, 403, 404:
        errorString = "There was something wrong with the request."
        fallthrough
    default:
        errorString += " Please review the request and try again."
}

Null pointer

Accessing a null pointer is another really common source of software bugs that can be exploited (see for example CVE-2016-6604 or VULDB-ID-96306). A null pointer dereference occurs when a pointer with a value of NULL is used as though it pointed to a valid memory area. Tony Hoare, a British computer scientist and the inventor of the null reference, apologised a couple of years ago at a conference for having introduced null references back in the 60s:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

Hoare 2009, QCon London (InfoQ)

When using C, trying to access whatever such a null pointer points to leads to (usually unwanted) undefined behaviour. (Objective-C works a little differently: it largely avoids null pointer errors by simply ignoring them, as sending a message to nil is a no-op. Being built on the C language foundation, though, it also knows cases where null pointers cannot be ignored, for example when adding them to an Array or a Dictionary.) So, to be able to deal correctly with null, programmers have to insert null checks into their code. The problem is that this potentially has to be done with every reference, as there is always the possibility that it might be null. (Again, Objective-C functions a little differently; it lets you add nonnull annotations. These were added to the language retrospectively, though; see the Xcode 6.3 release notes.)

In Swift such a null check would look something like this:

if message != nil {
    print(message!)   // message is known to be non-nil here
}

When using Swift, programmers do not have to perform such checks all the time, because Swift uses Optional types to deal with the concept of nothingness. The Optional type indicates whether a constant or variable can ever equal nil. It can be used equally (and consistently) with any type. If a variable is not an optional, it can be assumed never to be nil; non-optionals can therefore always be used safely. Optional variables, the ones that can be nil, are marked with a ? to show that they are of a different type. This immediately indicates whether a nil check is necessary. To use an optional, it has to be unwrapped. Unwrapping can either be done safely, by using if let or guard let ... else, or unsafely, by appending a ! to the variable; the latter is called force unwrapping.
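As a small sketch of both sides (the function name and strings are illustrative, not from the original text):

```swift
// guard let: safely unwrap, with an early exit when the optional is nil
func greet(_ name: String?) -> String {
    guard let name = name else {
        return "Hello, stranger!"
    }
    return "Hello, \(name)!"   // 'name' is a plain, non-optional String here
}

print(greet("Ada"))  // Hello, Ada!
print(greet(nil))    // Hello, stranger!
```

The compiler enforces this: using an optional where a non-optional is expected, without unwrapping it first, is a compile-time error rather than a runtime surprise.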

When you force-unwrap an optional, you expect to get a value, and the rest of your code is written assuming there is a value to work with. If the optional is nil, there is no value. The only reasonable thing Swift can do is immediately stop the program. If your program did continue, either it would crash when it tried to access a nonexistent value or, worse, it could continue to run but produce incorrect results. (Both of these possibilities come up in less-safe languages like C.)

Swift Programming: The Big Nerd Ranch Guide, p.238

Safe unwrapping example:

func displayErrorMessage(_ optionalMessage: String?) {
    if let message = optionalMessage {
        print(message)
    }
}

Calling displayErrorMessage() with a String as its parameter prints that String; calling it with a nil value does not do anything.


The examples discussed illustrate how Swift tries to be a safer language than its predecessors by 'tricking' developers into using the more secure option: it makes that option the default. In all three examples, though, it is possible to circumvent the safeties Swift imposes. Swift gives its developers ways to override them if they want to, but forces them to make a conscious decision about it.

This can, in the long run, help make software more secure. Of course, this approach cannot catch all (and especially new) bugs, but by using safer languages we can at least develop software that is less prone to well-known bugs like the ones discussed above. And this does not only apply to new developers who are still learning; even experienced developers who know about potential problems and pitfalls can't always catch every error that could cause security issues later on.

When developing safer languages, however, the community should keep in mind that the rules baked into a language should be there to help developers do their job. If languages become too strict, programmers might tend to override the safeties and make everything worse. Making languages secure therefore involves more than just technical aspects – the actual usability of the language influences the security of software just as much.