Posts in Programming

8 Resources That Will Make You Better At Programming

February 21, 2019 · Posted in Programming

You can never have too many resources when it comes to honing your craft. These resources for beginner and intermediate programmers will make you a more valuable programmer. They all focus on the same core skill: writing code that scales. In other words, code that is simple, readable, and effective.

Effective Programming

Gatson’s notes on Jeff Atwood’s Effective Programming give you a nice TL;DR of what being a good programmer is all about. One of the notes says, “Don’t become a programmer only because someone tells you to. Become a programmer because you want to solve problems, you want to write your own rules, and you enjoy doing it.”

Variable Names

These slides focus on naming conventions for Go, but the principles of good naming apply to many languages. Properly naming variables is one of the simplest ways to write code that your co-workers will appreciate.
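As a rough, hypothetical sketch of the kind of advice such slides give (the function and variable names below are mine, not taken from the slides): keep names short where their scope is tiny, and spend the characters where a reader actually needs them.

```go
package main

import "fmt"

// Names grow more descriptive as their scope widens. A short name like n is
// fine inside a two-line loop; identifiers that live longer deserve full words.
func averageLatencyMillis(samples []int) float64 {
	if len(samples) == 0 {
		return 0
	}
	total := 0
	for _, n := range samples { // n is fine: its scope is two lines
		total += n
	}
	return float64(total) / float64(len(samples))
}

func main() {
	requestLatencies := []int{120, 85, 240} // descriptive: read far from where it is declared
	fmt.Println(averageLatencyMillis(requestLatencies))
}
```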

API Design

In the age of microservices, it’s not enough to build something that serves a single purpose. The code must be modular so that it can properly serve the wave of IoT devices that have flooded the market. This post reminds you that, as a programmer, you’re an API designer.

Simple Code Is Better Code

In this tech talk, Kate Gregory shows you why simple code is better code. Clever solutions may make you feel like a wizard for a brief moment, but all of that wizardry may come at the expense of a debugging process that would require the same feat of wizardry to resolve.

Line Of Sight In Code

This article uses Go as an example while showing developers of all stripes how to write code that reads well to humans. The “happy path” should be kept along the left edge of the function so that readers can follow the expected flow of execution at a glance. Other topics include returning early from a function when an error occurs, and so on.
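Here is a minimal sketch of that shape, using hypothetical helpers of my own (loadConfig, parseConfig, listenAndServe) rather than anything from the article: error cases indent and return early, while the happy path runs straight down the left edge.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical config type and helpers, only here so the sketch compiles.
type config struct{ addr string }

func loadConfig(path string) (string, error) {
	if path == "" {
		return "", errors.New("empty path")
	}
	return "addr=:8080", nil
}

func parseConfig(raw string) (config, error) { return config{addr: ":8080"}, nil }

func listenAndServe(cfg config) error { fmt.Println("serving on", cfg.addr); return nil }

// startServer shows the "line of sight" shape: errors return early and stay
// indented, while the expected execution flow reads top to bottom.
func startServer(path string) error {
	raw, err := loadConfig(path)
	if err != nil {
		return fmt.Errorf("loading config: %v", err)
	}

	cfg, err := parseConfig(raw)
	if err != nil {
		return fmt.Errorf("parsing config: %v", err)
	}

	return listenAndServe(cfg) // happy path: no nesting required
}

func main() {
	if err := startServer("app.conf"); err != nil {
		fmt.Println("error:", err)
	}
}
```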

Functional Options For Friendly APIs

Dave Cheney shows you how to design APIs that scale by taking advantage of what he calls functional options: sensible defaults for the simple use case, readable configuration parameters, and direct control over the initialization of complex values.
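A minimal sketch of the pattern, with a hypothetical Server type and options of my own choosing rather than Cheney’s examples:

```go
package main

import (
	"fmt"
	"time"
)

// Server and the options below are illustrative, not from the talk.
type Server struct {
	addr    string
	timeout time.Duration
	maxConn int
}

// Option mutates a Server during construction.
type Option func(*Server)

func WithTimeout(d time.Duration) Option {
	return func(s *Server) { s.timeout = d }
}

func WithMaxConns(n int) Option {
	return func(s *Server) { s.maxConn = n }
}

// NewServer needs only the required address; everything else gets a sensible
// default that callers can override with readable, self-describing options.
func NewServer(addr string, opts ...Option) *Server {
	s := &Server{addr: addr, timeout: 30 * time.Second, maxConn: 100}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	simple := NewServer(":8080")
	tuned := NewServer(":8080", WithTimeout(5*time.Second), WithMaxConns(10))
	fmt.Println(simple.timeout, tuned.timeout, tuned.maxConn)
}
```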

Don’t Just Check Errors, Handle Them Gracefully

You’re bound to get errors. It’s just part of the life of a programmer. Dave Cheney shows you how to handle those errors more gracefully: assert on an error’s behavior rather than its type, annotate errors with context, and handle each error only once.
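As a hedged sketch of the annotation idea, using the newer Go standard library (error wrapping with %w and errors.Is, which arrived after Cheney’s article; his own examples use the github.com/pkg/errors package):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// readSettings annotates the underlying error with context instead of
// returning it bare; callers get a trail of what was being attempted.
func readSettings(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read settings %q: %w", path, err)
	}
	return data, nil
}

func main() {
	_, err := readSettings("settings.toml")
	if err != nil {
		// Handle the error once, here, by checking the underlying behavior
		// rather than sprinkling checks throughout the program.
		if errors.Is(err, os.ErrNotExist) {
			fmt.Println("no settings file, falling back to defaults:", err)
			return
		}
		fmt.Println("fatal:", err)
		os.Exit(1)
	}
}
```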

Learnable Programming

In this essay, Bret Victor walks you down a path toward learning how to program the right way. Rather than copy code examples from Stack Overflow, the aspiring programmer should learn how and why certain functions are used. The essay calls for an approach to teaching that mirrors how culinary students learn to cook.

 


Compiled Programming Languages vs. Interpreted Programming Languages

February 21, 2019 · Posted in Programming

I read a book titled Liar’s Poker some time back. It was written by Michael Lewis, and it chronicled his stint at Salomon Brothers. One of the terms used to characterize traders who raked in money for the company was Big Swinging You Know What. I added the You Know What, but you get the idea. The term was meant to describe the sort of pride that haloed a successful trader.

The point is that programmers, both women and men, can sometimes wear their preferred language like a halo, looking down on the less endowed. One of my favorite lines about this hierarchy within the field of programming comes from Steve Yegge, a satirical blogger, rant artist, and software engineer. Here’s his description of what he calls the “DAG of Disdain”:

“At Google, most engineers are too snooty to do mobile or web programming. ‘I don’t do frontend’, they proclaim with maximal snootiness.

There’s a phenomenon there that I like to call the ‘DAG of Disdain’, wherein DAG means Directed Acyclic Graph, which is a bit like a flowchart.

At the top of Snoot Mountain sit the lofty Search engineers writing in C++, which is considered cooler than Java, which is cooler than Python, which is cooler than JavaScript.

And Search is cooler than Ads, which is cooler than Apps, which is cooler than Tools, which is cooler than Frontends. And so on. Programmers love to look down on each other. And if you’re unlucky enough to be a Google mobile engineer, you’re stuck scuffling around at the bottom of several totem poles with everyone looking down on you.”

Of course, this is at Google, which isn’t exactly known for its robust UI. It’s known for its powerhouse of a search engine, with marketers talking about its new algorithms like they talk about restaurant specials. But just like the bond traders at Salomon, the demand for C++ programmers is high, which drives up their value and, for some, their egos.

They live in their paradisaical niche of compiled programming languages, lording it over those who share their realm and casting a side eye at that other realm: the interpreted languages and the jank often associated with them.

To speed demons like C++ programmers, a JavaScript developer is racing on a tricycle. Or, better yet, in a 2001 Jaguar X-Type badly in need of a tune-up. Eventually, JavaScript developers will have to visit the local C++ tune-up shop if they ever want their app to scale.

So, when it comes down to it, which is better: a compiled programming language or an interpreted one? A few developers had a friendly discussion about exactly this question. Here are their responses.

 

On The Fence

 

Rob Hoelz

At the risk of being pedantic, there’s no reason you can’t have both a compiled and interpreted implementation of a language – for example, Haskell has GHC, which compiles down to native code, and the lapsed Hugs implementation, which is an interpreted implementation of Haskell. Scheme and ML are other languages which have both interpreted and compiled implementations.

Then you get into interesting territory with languages like Java, which are compiled to bytecode, and the bytecode is interpreted (and possibly just-in-time compiled) at runtime. A lot of interpreted languages – Python, Ruby, Lua – actually compile to bytecode and execute that when you run a script.

Performance is a big factor when it comes to interpreted vs compiled – the rule of thumb is that compiled is faster than interpreted, but there are fancy interpreted systems which will generate faster code (I think some commercial Smalltalk implementations do this).

One nice thing about compiling down to native code is that you can ship binaries without needing to deploy a runtime; this is one of Go’s strongest features, in my opinion!

Dustin King

Ideally, it would be like Common Lisp in that it has interpretation and compilation both built in from the start. ANSI Common Lisp described it as having “the whole language there all the time”: you can compile code at runtime, or run code at compile time (which allows for lots of metaprogramming).

When you want fast iterative development, you want interpreted code. In production (and especially on resource-constrained devices), you want code to run as fast and memory-efficiently as possible, so you want it compiled. The ideal language would make it effortless to transition between the two as needed.

Derek Kuhnert

It all depends on the intended purpose.

Compiled languages:

  • Are faster at runtime
  • Conceal source code
  • Have associated compile time
  • Are better when you’re not making frequent changes to the code, and care a lot about runtime speed

Interpreted languages:

  • Are slower at runtime
  • Ship their source code in the open, though that code can be obfuscated (minification, uglification, etc.)
  • Don’t have to compile before use, but can have an initial parse time that’s typically much faster than compile time
  • Are better when you are making frequent changes to the code, and don’t care as much about runtime speed

There are also factors regarding whether you need additional software to be able to run the code, but languages like Java (compiled but still needs the JVM) kinda muddy the waters on this.

For Interpreted Languages

 

Ben Halpern, in response to Rob Hoelz:

I don’t think this is pedantic, I feel like it’s a great evaluation of the whole question.

I personally take the give and take of each scenario and don’t draw the line for my own uses.

In my life these days, I’ve been writing Ruby and JS for the most part but a bit more Swift for iOS lately. I don’t like that I have to wait to compile and run when I write native in Swift, but I accept it as part of the world I’m in when I work with this tool.

I hope in the long run that good work keeps going into making interpreted code more performant and compiled code easier to work with.

 

rhymes

I don’t like that I have to wait to compile and run when I write native in Swift

That’s one of the “selling” points of Go: the fast compilation times. In my experience it invalidates the classic “compiling” excuse.

I’m sure the Swift compilation phase is also “slowed” by the enormous amount of stuff you have to compile for a mobile app to function 😀

 

For Compiled Languages

 

Casey Brooks

I am a huge Java and Kotlin (compiled languages) fanboy, and really don’t care much for Groovy, JavaScript, Ruby, or Python (all interpreted languages). One of the main reasons I like compiled languages is that they give me a sense of safety in refactoring that you don’t get with interpreted languages.

For example, if I change a variable name and forget to update the code that used that variable, a compiled language will fail to compile, and I am forced to fix it everywhere.

In an interpreted language, even one that is preprocessed using something like Webpack, you can’t know for sure that you’ve renamed the variable everywhere until you start getting errors at runtime.

You can mitigate it with static analyzers, but that’s just an extra thing you have to get set up, which comes for free with compiled languages.
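A small, hypothetical Go illustration of Casey’s point: rename a field, miss one use site, and the compiler refuses to build instead of failing later at runtime.

```go
package main

import "fmt"

type user struct {
	fullName string // renamed from "name"
}

func greet(u user) string {
	// If this line still said u.name after the rename, the program would not
	// compile: the compiler reports that the field no longer exists.
	return "Hello, " + u.fullName
}

func main() {
	fmt.Println(greet(user{fullName: "Ada"}))
}
```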

 

Yaser Al-Najjar 

Compiled languages just win on everything (performance, reliability, and so on).

But most famous enterprise backend languages (Python, C#, Java, etc.) take the best of both worlds: they get compiled into some special binary format, then an interpreter on any platform (Android, iOS, Windows, Linux) takes that special code and executes it.

All that to achieve serious cross-platform support, and they do work on virtually all devices.

Ah, JS is an exception 😀

[continuation]

C / C++ are just faster than any other lang (given we use the latest compiler optimizations and the right implementation of the software)… Perl interpreter itself is implemented in C & C++.

All the people here agree on this point: news.ycombinator.com/item?id=8626131

Ditto for reliability. I can’t see C being more reliable than Haskell, for instance.

From my experience, run-time errors occur more often in interpreted languages than in compiled languages. I do Python and C#, and I face a lot more run-time errors in Python than in C#.

 

Paul Lefebvre

Interpreted languages can often give you more instant feedback when coding. But they also typically run slower, and you might have to ship your code in source (or obfuscated) form along with the run-time interpreter, which is often less than ideal.

Then there are intermediate languages that are compiled a bit, but still use a run-time. Java and C# come to mind here.

Then there are fully compiled languages. Compiling your code to native machine code is nice for source security. Performance is often better as well, but the compile process can take time.

I guess I would prefer a fast, compiled language these days.

 

Zuodian Hu

A lot of people are arguing about performance.

Assuming good implementation:

Compiled languages are faster in general because they don’t need to run through any intermediate interpreter. Instead of finding out what needs to be done and then doing it, at run time the program just does what needs to be done.

In an interpreted language, the program needs to first figure out what needs to be done (be interpreted), then it can go and do it. The sorts of optimizations that this allows can reduce the interpretation overhead, but I can’t think of a practical and useful way it can turn that overhead around into an advantage.

 

tux0r

Compiled languages always have superior performance and require far fewer resources at runtime. Interpreted languages are neat while debugging, but if they don’t have a compiler, they’re out for me.

 

What Are Your Thoughts?

 


The Easiest Programming Language To Learn Is…

February 20, 2019 · Posted in Programming

I want to start this article by saying that choosing to learn or not learn a language based on its difficulty is a bad start to a career as a programmer. You’re eventually going to have to learn another language that may be harder than the “easy” one. That said, there is a solid argument for choosing a language with the shallowest learning curve. That language may make it easier for you to grasp paradigms like object-oriented programming, which is the bedrock of modern programming.

So, let’s define what makes a programming language easy before picking one out of a hat. These are just some parameters I came up with back when I wanted to learn how to program; I found that they contributed to my comprehension of programming.

What Makes A Language Easy To Learn?

 

An easy-to-learn language has an excellent community. This community creates an ecosystem of detailed documentation that helps you solve problems. It also provides help through forums (though Stack Overflow has given almost every language this aspect of community). Thirdly, the community provides modules that make your life as a developer easier. Again, many languages have this feature, but a certain language is infamous for ham-fisting it (I’m looking at you, JavaScript).

 

An easy-to-learn language comes with many built-in methods. This is a bit like the batteries-included deal you get with toys. As someone new to programming, you want the language to ship with the functions required to complete a given task, rather than having to install a package or write a function of your own.

An easy-to-learn language isn’t mangled by rules and syntactic nuances. What scares many people off is seeing strings of curly braces and semicolons and thinking, well, how am I ever going to be able to read that, let alone write it?

An easy-to-learn language doesn’t have amorphous functions. At the end of the day, functions are the bread and butter of programming languages. The name says it all: apps won’t function without them. So being able to grasp functions early is crucial. Some languages make learning functions simple because the way you write them rarely changes. Other languages have circus functions whose shape differs according to context.

 

An easy-to-learn language has a powerful framework. At some point during your learning process, you’re going to want to build either a web app or some basic software to test your skills. The language should have a well-documented framework that lets you transfer your fledgling skills to it. At the same time, you don’t want to drown in frameworks. You want to be able to choose one and go, without a second thought. This process is supposed to be easy, right?

 

What Is The Easiest Language To Learn?

 

So now that we’ve laid down some highly subjective rules about what makes a programming language easy to learn, let’s pick a language out of the hat (it’s safe to say JavaScript developers aren’t holding their breath on this one).

Ruby is arguably the easiest language to learn. If we combine all the factors that make a language easy to learn, you’ll realize that Ruby has one of the best communities out there. RubyGems is an excellent package manager. Unlike npm, it’s not bloated. You only have to specify the gems you want in a Gemfile and install the corresponding “gems.”

The Ruby community is unique in that its origins were humble prior to the explosion of Ruby on Rails. David A. Black, the author of The Well-Grounded Rubyist, said of the early Ruby community, “The Pickaxe was the first English-language book on Ruby (there were already many books in Japanese), and the Ruby community outside of Japan was small enough that it was possible to get to know people easily through the English-language mailing lists and forums — on which, I should add, many Japanese Rubyists, including Matz, participated regularly.”

If you’re wondering who Matz (Yukihiro Matsumoto) is, he’s the chief designer of Ruby. Asked in the same interview what his favorite feature of Ruby was, Black went on to say, “It sounds corny but my favorite “feature” is the community. I’m less entwined in it than I used to be, but over the years it has been a great source of support, friendship, and inspiration.”

That community extends to the extensive and well-organized Ruby docs. Anything you need to know about a particular method or class is there in the docs. Because Ruby has so many built-in methods, you don’t need to install many gems to perform common tasks. Solving a particular problem is simply easier in Ruby because you don’t need to search for a clever workaround.

Syntactically, Ruby is one of the easiest languages to read. Compare it to a language like C++ and Ruby looks like a kind of pseudocode. Functions are called methods in Ruby, and defining one simply requires the def keyword prepended to the name you want to give your method. No curly braces required.

Finally, Ruby on Rails is a renowned framework. Not many other languages got propelled to fame like Ruby did because of a framework. Actually, you can’t go very long talking about Ruby without having Rails mentioned. The reason for this is that Rails turns you into a wizard, figuratively speaking; you can set up a functioning blog site with a few commands.

This is made possible by the fact that Ruby allows its more skilled developers to create domain-specific languages (DSLs) in Ruby itself. What this does for beginners is make a language that was already easy to understand even easier to use, because the new abstractions don’t require you to dig deeper.

This isn’t necessarily a good thing, but you can see how it’s much easier to feel competent with Ruby. While others might have to learn a bit of SQL to query a database, you simply need to learn Rails’ much easier plug-and-play layer, Active Record.

 

In The End

Choose whatever language gets you to accomplish your goal. If you just want to learn a language to show off your skills to friends and family, then choosing the easiest language to learn may be the way to go. Like the “bad” programming language question, the easiest programming language to learn boils down to what you want to do with the language in the first place.

Do you want to be a systems programmer?

Then, perhaps, Go might be the easiest language to learn, simply because the pool of systems programming languages is a very different one. It will be interesting to hear what others think the easiest programming language to learn is, since “programming language” means different things to different people. I asked this question in an open forum, and the answers included Brainf***, Scratch, SQL, and Java(?).


Researchers Were Already Talking About AI Factories In The 80’s

February 19, 2019 · Posted in Programming

Almost everyone knows this saying: “history repeats itself.”

It’s amazing how true that statement is, especially when it comes to technology. We like to think that every new decade in tech makes the previous decade completely obsolete. When we’re dealing with the minutiae of version updates and patches, we can sometimes see the past through tinted glasses. There’s a great quote attributed to Eleanor Roosevelt that expands on the idea of a continuous history:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.”

Events and people come and go, but ideas span decades. So, it really shouldn’t be surprising that, in an article written in the mid 80’s, you get this:

“AI systems are expected to open vast new opportunities for automation in the office, factory, and in the home. In the process, many observers believe they will profoundly alter the way people work, live, and think about themselves.”

That sounds like something a blogger might have written yesterday. Wired has several pieces covering the “fourth industrial revolution,” which is just a term used to describe automated factories, the same factories Paul Kinnucan predicted would emerge in “Artificial Intelligence: Making Computers Smarter.”

See, many of us are riding the AI hype train, but perhaps we should step off for a few moments and explore AI’s past. Because AI is nothing new, and I’m not talking about Asimov’s sci-fi AI; I’m talking about Turing-era AI. You can read his paper, “Computing Machinery and Intelligence,” to get an idea of what one of the greatest minds in computing had to say about machine learning.

The Early Days

Early research into AI involved breaking away from the limitations of procedural programming so that machines could be more “human.” This was achieved through heuristic problem-solving. The computer would use formal logic (if/then statements) and semantic networks that group related ideas together. This allowed the computer to offer dynamic solutions by matching patterns in its data to the problem at hand.
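As a toy illustration (my own, not drawn from the article) of what that rule-based style of problem solving looked like, here is a tiny forward-chaining sketch in Go: if/then rules fire against a set of known facts, and a small semantic network groups related concepts.

```go
package main

import "fmt"

// rule is a simple if/then statement: when all conditions are known facts,
// the conclusion becomes a fact too.
type rule struct {
	conditions []string
	conclusion string
}

func main() {
	facts := map[string]bool{"has-feathers": true, "lays-eggs": true}

	rules := []rule{
		{conditions: []string{"has-feathers", "lays-eggs"}, conclusion: "is-bird"},
		{conditions: []string{"is-bird"}, conclusion: "is-animal"},
	}

	// Forward chaining: keep applying rules until no new facts appear.
	for changed := true; changed; {
		changed = false
		for _, r := range rules {
			satisfied := true
			for _, c := range r.conditions {
				if !facts[c] {
					satisfied = false
					break
				}
			}
			if satisfied && !facts[r.conclusion] {
				facts[r.conclusion] = true
				changed = true
			}
		}
	}

	// A toy semantic network: related ideas linked together.
	network := map[string][]string{"is-bird": {"can-fly", "has-beak"}}

	fmt.Println("derived facts:", facts)
	fmt.Println("related to is-bird:", network["is-bird"])
}
```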

Yet despite these advances, AI wasn’t the runaway train that it is today, and that’s despite the fact that predictions made by science fiction authors before the 80’s seemed to be bearing fruit. That probably should’ve been enough to spark a craze, right?

Well, the problem with AI from the ’50s up to the early ’80s is a bit like the “talking machine” hoaxes of the 19th century. Researchers made exaggerated claims about the capabilities of their models when what was underpinning them were simple algorithms. Back then, AI researchers were fiercely debating whether or not machine intelligence should be patterned on human cognition. The entire idea of neural networks was actually more of a backwater in AI research before the mid-to-late ’80s. The guy who kicked off the whole idea of neural networks was Frank Rosenblatt. His “perceptron” was a model that was able to recognize letters and numbers. This led Rosenblatt to conclude in 1958,

“[The perceptron is] the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

That bold claim was shot down by Marvin Minsky, who countered that it was impossible for the perceptron to learn nonlinear functions, which, at the time, was crucial to any serious progress in AI. Minsky also added the cherry on top by claiming that the perceptron couldn’t differentiate numbers and letters from other, unrelated stimuli. This effectively ended most academic discussion about the viability of the perceptron. Unfortunately, it also cast a poor light on the field of AI as a whole.

It wasn’t until 1986 that the idea of multiple hidden layers was popularized as a way of giving neural networks a nonlinear reach. With hidden layers, according to the universal approximation theorem, networks could learn almost any function. This eliminated Minsky’s strongest objection. To make training such networks practical, David Rumelhart, Geoff Hinton, and Ronald Williams wrote a paper entitled “Learning representations by back-propagating errors.” Prediction errors are propagated backward through the network, and each weight is corrected using the derivative of the error with respect to that weight.
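As a heavily simplified sketch of that core move (my own illustration, not from the paper): differentiate the error with respect to a weight, then nudge the weight in the direction that reduces the error. Here a single weight learns y = 2x.

```go
package main

import "fmt"

// A single "neuron" with one weight, trained by gradient descent: the heart
// of backpropagation, minus the multiple layers and the chain rule.
func main() {
	inputs := []float64{1, 2, 3, 4}
	targets := []float64{2, 4, 6, 8} // the function to learn: y = 2x

	w := 0.0 // the weight being corrected
	learningRate := 0.01

	for epoch := 0; epoch < 1000; epoch++ {
		for i, x := range inputs {
			prediction := w * x
			diff := prediction - targets[i]
			// Derivative of the squared error (diff*diff) with respect to w
			// is 2*diff*x; move w a small step against that gradient.
			gradient := 2 * diff * x
			w -= learningRate * gradient
		}
	}

	fmt.Printf("learned weight: %.3f (true value 2)\n", w)
}
```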

Today

In 2006, this approach was rebranded as deep learning in an attempt to distance it from the “shallow” techniques of the past. Now, deep learning is the toast of Silicon Valley.

In the interim between 1969 and 2006, what kept AI alive was persistent research at MIT and Carnegie Mellon, as well as funding from the U.S. Department of Defense.

Another crucial reason for the existence of modern AI is technology. Many data scientists are familiar with the abstractions that TensorFlow provides when modeling data. But back in the ’80s, the problem was how to store the data in the first place. Expensive computers that filled entire rooms were a necessary part of data gathering. This relegated AI research to the academic realm, and advancement could only go as far as governments were willing to fund it. It wasn’t until cheap semiconductors arrived on the scene that tech companies like Hewlett-Packard could invest in AI.

Fast forward to today, and tech luminaries like Mark Zuckerberg can’t get enough of AI, despite the fact that advancements are still chasing a moving goalpost. That goalpost is a reality in which AI can identify stimuli with near-100% accuracy. So, perhaps, we’re repeating the mistakes of the past. Today’s deep networks may be yesterday’s perceptron: a human construct created with the ultimate goal of achieving an intellect superior to our own.

Resources

It’s important to note that this article pulled ideas from two major resources to make a larger point about how we’ve exaggerated the capabilities of AI. Again, Mark Zuckerberg thinks it will solve a myriad of problems, but you shouldn’t ask him how.

The first article is one of the most informative and engaging articles you’ll ever read about the history of neural networks. It was written by Andrew L. Beam and was titled, “Deep Learning 101 – Part 1: History and Background.”

The second article was written in the ’80s by Paul Kinnucan: “Artificial Intelligence: Making Computers Smarter.”

 

 

 


1983 Conference Program Was Already Concerned About Teaching Youth Programming

February 15, 2019 · Posted in Programming

I came across a conference program for “Computer-Using Educators” that took place in 1983. Even back then, terms like “computer literacy” were being thrown around when it came to educating students.

I thought the program might stop at basic computer operations and software uses, but there was a second section dedicated to implementing full-fledged computer science courses in schools as well as teaching seminar attendees languages of the time like Logo.

There was one program proudly titled “Debuggers Seminar for Logo Users.” In that seminar, attendees could learn more Logo, presumably so they could then teach what they’d learned to students. Another seminar called for implementing Logo and BASIC not just for teenagers, but for elementary school students from kindergarten to 5th grade.

 

The big point worth making here is that we’ve always seen the need to introduce computer science courses to students. Yet decades have gone by, new languages have been developed, and it’s still newsworthy for a school to have a K-12 computer science program. An Education Week article written a year ago mentioned that some classes even count learning how to type as part of a computer science curriculum. Perhaps that’s why I’m so struck by the seemingly radical proposals put forth in the 1983 program. Sure, most of them were geared toward educators, but the idea that fifth graders could have been learning BASIC is stunning when it shouldn’t be. By fifth grade, students are learning grammar and performing the same kinds of calculations they would perform in a programming language.

Now more than ever, children as young as seven can manipulate user interfaces. The world is becoming one in which technology is inescapable. Computer literacy as it pertains to using software was a concern of the ’80s. Now we can move on to programming literacy, whereby students learn to develop their own apps to solve their own problems, just as they gain textual literacy to communicate.

 

 

 

 


New Text Writing AI Too Dangerous To Be Fully Released

February 15, 2019 · Posted in Programming

What would your impression be if the post you’re reading now had been written by a machine? Researchers at OpenAI have made prose-crafting bots more of a reality. According to them,

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”

Their model is called GPT-2, a transformer-based language model that the researchers say has 1.5 billion parameters and was trained on a dataset of 8 million web pages. The model’s job is to predict the next word, given all of the previous words, within a whopping 40 GB of text. Since the model isn’t relegated to one specific domain of data, say, books for example, the researchers claim that it outperforms other models. Rather than limit their model, they flung open the doors of the Internet. As you’ll see below, some of their results aren’t that surprising considering that fact.
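To make “predict the next word” concrete, here is a toy sketch of my own: nothing like GPT-2’s transformer, just word-pair counts over one tiny made-up sentence, but the task has the same shape.

```go
package main

import (
	"fmt"
	"strings"
)

// A toy next-word predictor: count which word follows which in a tiny corpus,
// then pick the most frequent follower.
func main() {
	corpus := "the model writes text the model predicts the next word"
	words := strings.Fields(corpus)

	follows := map[string]map[string]int{}
	for i := 0; i < len(words)-1; i++ {
		cur, next := words[i], words[i+1]
		if follows[cur] == nil {
			follows[cur] = map[string]int{}
		}
		follows[cur][next]++
	}

	// Predict the most common word that follows "the".
	best, bestCount := "", 0
	for w, c := range follows["the"] {
		if c > bestCount {
			best, bestCount = w, c
		}
	}
	fmt.Printf("after %q the model guesses %q\n", "the", best)
}
```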

In their article, the research team provide a sample generated from a short prompt; they note that it took ten tries to get a convincing result. It’s important to define what a good sample is for a language model. Large, diverse training datasets generally yield strong results; the real test comes when the model is applied to a specific domain where there is far less data to work with.

Interestingly enough, OpenAI mentioned that GPT-2 was more consistent at producing good samples for pop-culture topics like movies and celebrities, generating adequate samples about 50% of the time. On the other hand, it performed poorly on more technical topics. Does it say something about our media-obsessed society that this AI had more to work with when it came to hot topics? Maybe. But the researchers offer a solution in what’s called “fine-tuning.”

Since GPT-2 consumes a general dataset, the researchers are finding that their model is already nearly on par with, and sometimes surpasses, models built for domain-specific tasks. If they were to fine-tune the model, it might surpass other models at reading comprehension and summarization.

Already, this model is so good that the researchers fear that, in the wrong hands, the tool can be used for sophisticated phishing attacks and other malicious bot-related activities. We all know about the prevalence of deep fakes that take advantage of image models. A bevy of articles have been written about their uses and many have given instructions on how to spot one. Recently, we’ve had an outbreak of fake news. False data surrounds us.

In a statement about the pressing need to secure advances in AI, OpenAI states, “We should consider how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures.”

There may come a time when the first book written by AI is published. It may or may not be published under a pseudonym. It might be a terrible book, but then we’d be saying that machines are bad writers, thereby admitting to an extent that they’re capable of creativity, which is an eye-opening thought in and of itself.

To continue the theme of reading comprehension and summarization, you should read the full article on OpenAI’s website. Perhaps their AI could have done a better job of summarizing their amazing work.

Resource

OpenAI article

 

 

 


6 Developers Teach You How To Give A Coding Talk

February 13, 2019 · Posted in Programming

Giving a presentation about a tech stack you’ve been working on with your team can be a frightening experience if you’ve never done it before. I remember being one of three people presenting an app that my teammates and I had built at a coding bootcamp. The difficult part about presenting was making sure that I didn’t spew technical jargon while still being able to communicate my part of the presentation. That meant researching the details of the app to the point that I knew enough not to rely on key phrases. My experience in giving talks is very limited, so I’ve dug up answers that developers have given to the question of how to give coding talks with minimal experience.

 

Joe Zack

 

“I recommend going in person and talking to the speakers and organizers. Many meetups and conferences (particularly the free ones) prioritize new speakers so it’s just a matter of finding the right people in your area.

Events with “lightning talks” are a great place to start because you get to meet multiple speakers at once, and they’ll be able to help guide you to the right people.

As for finding the events themselves, I just launched a prototype of a site designed for just that: findTech.events (open-source on github)

I’ve only gotten a few hours in on the project so far, but the idea is that you will be able to search for conferences in your area, with your preferred techs, and if I can get the data – call-for-proposal dates.”

 

Molly Struve

 

It is all about the story, people want to hear a good story! This talk is what really inspired me to take a whack at speaking.

A couple of ideas for getting started:

  • Definitely check out local Meetups for speaking opportunities.
  • Consider doing a lightning talk your first time.
  • If you have an opportunity to talk internally at your company, start there. I started out by giving a lot of presentations at my company, then graduated to Meetups, and eventually made it to the big stage at RubyConf.

 

Miklos Bertan

 

About giving talks:

  • Talk about the things you are passionate about. It doesn’t matter if you feel like it’s a cliche or over-represented topic. Experience and passion are way more important than the topic itself.
  • Keep the slides minimalistic and use images (but avoid videos). Explain everything in person through speech.
  • Practice. Make sure that the talk will not be too long or too short. Those are awkward.
  • Don’t over explain, it’s not a class. It’s better to give an impression about complex topics to the beginner half of the audience than to bore the experienced half. Beginners will pick up enough to dig deeper into the topic at home.

About getting signed up for talks:

I am only presenting at local meetups (for now) and I am satisfied with it. This is my experience so far:

  • Getting signed up to a conference is pretty difficult (for the first time at least).
  • Getting signed up for a meetup is a piece of cake. In my country, there is usually a shortage of speakers. Just write an email to a few meetup groups, they will be happy about it.
  • Meetups have a very friendly tone, it’s okay to be amateurish there.

I don’t think you have anything to worry about, it will be a nice experience. Good luck!

 

Patrick Tingen

 

I have given a couple of talks and my advice is this:

  • Use large images that fill your whole screen, and don’t use borders. Large images look better than small ones.
  • Don’t use too much text, and use at least font size 30. This will keep you from cramming too much onto the slide, and at the same time even people in the back of the room can read it.
  • If you use text, use white text on a black background; otherwise you will have a huge white rectangle of light on the wall behind you. Dark mode ftw!
  • Beware of giving live demos, since they tend to go well all the time except when you’re actually giving your talk.
  • Take your listeners on a journey; introduce a problem and why it is a problem, then tell how the problem can be solved.
  • If you give your talk in English and it is not your native language, then rehearse, but also relax; no one will laugh at you if you make a mistake.

 

Cameron Lepper

I’ve done a fair bit of public speaking on coding over the last year. If there’s one single piece of advice I can give, it’s to consider your audience before you plan your presentation. It’s often overlooked, but it’s a crucial component.

I’ve presented coding and knowledge sharing in schools, universities and professional environments; by understanding your audience, the rest will follow.

Engaging your audience is so much easier when the content is appropriately accessible and digestible for the audience. I’ve found this out, ashamedly, the hard way!

I don’t really use notes, although I know some people are more comfortable with an aid of some sort. What I do have, however, is one small bit of card with ‘Speak Slower’ written in capitals. It’s a visual prompt that catches my eye throughout the presentation, one the audience cannot see, and it usually makes me go, “ah, yeah, I’m speaking too quickly.” It’s so easy to lose track of your verbal pace when presenting code; I find this happens particularly as I get toward the crux of my topic or the conclusion I’ve been excitedly building up to.

Anecdotes are useful, but don’t overuse them. Whilst anecdotes are a good way of maintaining engagement and justifying an opinion, they’re not explicitly objective and therefore should be used sparingly – or at least appropriately.

Also, when presenting something that requires some digestion, it’s OK to pause briefly. Incorporate it into your presentation, and it’ll be more natural.

 

 

Joe Mainwaring

Ignore the imposter syndrome that you might feel about public speaking; expertise is relative and based on our experiences. Focus instead on finding an audience that would benefit from hearing about your experiences: there’s always going to be someone looking to get to where you already are.

For giving a talk, I find that starting with an outline is always the best approach to structuring my talk and fleshing out the content for the different talking points. I always have a draft of an outline before I touch any presentation tool like PowerPoint.

Presentations should be supporting aids to a talk, not the main content itself.

Lastly, practice giving your talk. Practice makes perfect.

 

 


How Systems Are Hacked With No Programming

February 13, 2019 · Posted in Programming

With the number of data breaches that have occurred over the past few years, it’s easy to imagine that hackers are always coding in their terminals. That couldn’t be further from the truth. To hack a system effectively, you have to be able to read people just by observing them. That’s because people make much more serious blunders than computers do. It takes far more time and effort to attack a secure system remotely than to pretend you’re the IT guy at a company and infiltrate the system in person.

Even as new attack vectors emerge with the rise of IoT and as cybersecurity becomes more and more of a buzzword, we shouldn’t forget that robots aren’t trying to hack systems; people are. There are many great articles that delve into the psychology of dress and what it says about a person.

That’s just at a superficial level.

For example, a tailored suit may say that a person is confident and successful. A hacker would go a level further. That suit, if it were pinstriped, might tell the hacker that this lawyer is an associate at a certain law firm in New York. The hacker might then pull up a list of New York law firms whose partners regularly wear pinstripes, to judge whether the suit is part of the firm’s culture.

That was just one example of how to think like a hacker. It’s all about gleaning data and then applying it. Taking the analogy further, the hacker could put on a pinstriped suit and claim to be an IT professional, having learned that IT staff at this particular firm like to impress the higher-ups by adhering to the dress code. This is obviously just an example, but people have been fooled by impersonations and other crafty social engineering.

One great way to become more secure is not to have a password that relates to your occupational or personal life. Just by looking at your personal effects, a good social hacker might be able to tell where you work and what your role is. From there, the hacker can work their way up.

If you’re still skeptical about how hackers can infiltrate a system without any technology, this 2007 video from ShmooCon walks you through a hacker’s thought process. The presenter is Johnny Long, who is known for using Google searches to find vulnerable servers.

 


1976 Article Stops Short of Predicting Voice Assistants

February 13, 2019 · Posted in Programming

Many of us have either watched or heard about movies featuring sentient machines taking over the world. Science fiction usually sets out to provide a model for the future, whether good or bad, and talking machines have almost invariably been portrayed as a danger to mankind. Yet, despite the warnings of great sci-fi authors like Philip K. Dick, the waves of time carry us inexorably forward to a future where our machines do more than parrot.

This process didn’t start with Alexa or Google Home. It actually started in the 18th century, when a man named Wolfgang von Kempelen built his speaking machine. According to James L. Flanagan, von Kempelen

“constructed and demonstrated a more elaborate machine for generating connected utterances…It used a bellows to supply air to a reed which, in turn, excited a single, hand-varied resonator for producing voiced sounds. Consonants, including nasals, were simulated by four separate constricted passages, controlled by the fingers of the other hand. An improved version of the machine was built from von Kempelen’s description by Sir Charles Wheatstone (of the Wheatstone Bridge, and who is credited in Britain with the invention of the telegraph). It is shown below.” 


James L. Flanagan, “Speech Analysis, Synthesis and Perception,” Springer-Verlag, 1965, pp. 166-167. Via Haskins Laboratories.

 

 

In 1820, Joseph Faber, inspired by von Kempelen’s speaking machine, invented one of his own, called the Euphonia. He showcased his invention in London by letting the machine sing God Save the Queen. Levers were used to operate the device. Author David Lindsay noted that “by pumping air with the bellows… and manipulating a series of plates, chambers, and other apparatus (including an artificial tongue…), the operator could make it speak any European language.”

The Euphonia of Joseph Faber

 

The machine, one could say, was engineered to replicate the biological mechanisms of human speech. Fast forward to the present, and Google is now offering a more diverse range of voices tailored to various contexts. We’ve essentially replaced bellows with microprocessors.

An article written by Wirt Atmar in 1976 details this evolution from von Kempelen and Faber to AI Cybernetic Systems. Reading it is a reminder of how far we’ve come in terms of computing. Machine language processing has now bypassed the need to manually encode entire dictionaries. We’ve learned that machines aren’t humans, so models and datasets must exist to allow them to learn our language properly.

The likes of Bradbury assumed that machines would gain language but the how was not quite there and is still taking shape, albeit at a glacial pace.

To quote Atmar, “whether we see the computer becoming the benign and obedient servant of man or wildly out of control, we all tend to see the computer becoming more anthropomorphic, more human like in behavior and form.”

We’ve been psychologically preparing for an age of conversational machines. Imagine what Faber would’ve thought if he’d been told that one day a machine the size of your hand would respond to just the sound of your voice. Impossible! Now, we just shrug, thinking of this evolutionary concept as just another technical milestone on the timeline of humankind, a timeline that began with fire.

Reference

If you want to read the 1976 article that this piece is based on in its entirety, click here. It’s titled “The Time Has Come To Talk” and was featured in Byte Magazine Issue #12.

 

 


Can You Learn a Programming Language By Majoring in Computer Science?

February 13, 2019 · Posted in Programming

 

I’m going to answer this question by showing you a general list of books you might have to read if you decide to take the CS route of learning a language. If reading these books and understanding them proves difficult, then perhaps learning a programming language can be done through other mediums, like online courses.

Of course, programming is much more than knowing rules and syntax. A Computer Science curriculum trains you to think analytically. For example, a Discrete Structures course allows you to communicate in terms of logarithmic time so that you can sound competent in front of your future colleagues.

Someone who wants to learn, let’s say, how to build the indie game she’s dreamed of building since she was a child has the option of getting a CS degree.

That will involve getting at least slightly familiar with some of the books below. Note that this isn’t a comprehensive list; it centers on C++ and includes textbooks suited to the aspiring game developer.

The last section can easily be removed or swapped out by any aspiring software engineer, and the first section can be swapped for any programming language taught at universities.

I should also mention that books aren’t terrible. You can certainly learn something by reading a few textbooks.

What is bad is thinking you’re going to become a programmer simply by sitting through a few classes and highlighting key points in a textbook. The restaurant mentality, where everything is served to you, doesn’t work in software engineering or in life in general. Theory is great because it makes you sound like an intellectual and may come in handy in a few situations, but nothing beats practice.

Also note: these books were sourced from the Open Syllabus Project, a website that stores a database of syllabi from various universities across the U.S.

 

The Programming Language (C++)

 

C++ Primer (This is actually a really good book. I was able to use it as a reference to build a simple text-based game years ago.)

C++ : The Complete Reference

Starting Out With C++ : From Control Structures Through Objects

C++ Programming : Program Design Including Data Structures

The C++ Programming Language (This book was written by Mr. C++ himself, AKA Bjarne Stroustrup, the creator of C++.)

Problem Solving With C++ : The Object of Programming

C++ Plus Data Structures

Data Abstraction and Problem Solving With C++ : Walls and Mirrors

Visual C++ 2008 : How to Program (I doubt many people read this book. I could be wrong.)

C++ FAQs

 

Algorithms

 

Discrete Mathematics and Its Applications

Randomized Algorithms

Algorithms

Algorithm Design

The Algorithm Design Manual

 

Architecture

 

Software Architecture in Practice

Object-Oriented and Classical Software Engineering

Structured Computer Organization

 

Systems

 

Fundamentals of Database Systems

Database Management Systems

Operating System Concepts

Modern Operating Systems

Computer Networks : A Systems Approach

Operating Systems : Internals and Design Principles

 

 

Gaming

 

Fundamentals of Computer Graphics

3D Game Programming All in One

OpenGL Distilled

A Theory of Fun for Game Design

 

 
