Posts in News


Ruby 3 Will Introduce Types

April 19, 2019 | Posted in News


Though Yukihiro Matsumoto, or “Matz,” has long opposed introducing types to Ruby, Ruby 3 will introduce them. The announcement was made at the RubyKaigi 2019 press conference, where Stripe engineers Paul Tarjan and Jake Zimmerman demonstrated a type checker they created for Ruby. The type checker is called Sorbet, and Stripe successfully adopted it into an existing code base of millions of lines of code. What solidifies the announcement of a type system is the fact that Sorbet will be supported by Ruby from the ground up; Ruby 3 will standardize types in its stdlib source.

Here’s example code from Sorbet’s documentation:


# typed: true
class A
  extend T::Sig
  sig {params(x: Integer).returns(String)}
  def bar(x)
    x.to_s
  end
end

def main
  A.new.barr(91)  # error: Typo!
  A.new.bar("91") # error: Type mismatch!
end


The project to typify Ruby has been years in the making. In 2015, Matz first announced Ruby 3×3, the goal being to make Ruby 3 three times faster than Ruby 2. In an interview a year later, Matz said:

 In the design of the Ruby language we have been primarily focused on productivity and the joy of programming. As a result, Ruby was too slow, because we didn’t focus on run-time efficiency, so we’ve tried to do many things to make Ruby faster. For example the engine in Ruby 1.8 was very slow, it was written by me. Then Koichi came in and we replaced the virtual machine. The new virtual machine runs many times faster. Ruby and the Ruby community have continued to grow, and some people still complain about the performance. So we are trying to do new things to boost the performance of the virtual machine. Even though we are an open source project and not a business, I felt it was important for us to set some kind of goal, so I named it Ruby 3×3. The goal is to make Ruby 3 run three times faster as compared to Ruby 2.0. Other languages, for example Java, use the JIT technique, just in time compilation; we don’t use that yet in Ruby. So by using that kind of technology and with some other improvements, I think we can accomplish the three times boost.

In order to boost Ruby’s speed, Matz also tossed around optimizing caching methods and implementing concurrency in the core Ruby language.

What he didn’t mention in that interview was a type system. The inclusion of a type system was perhaps an inevitable concession Matz was going to have to make. Part of Ruby’s philosophy is having multiple ways to do things so as not to restrict the user. However, this philosophy causes complications when apps built with Ruby grow in size. Properly documenting the code becomes difficult, and the inherent slowness of dynamic typing hampers speed. In order to achieve the 3×3 goal, typing was probably needed.






Microsoft Releases Bosque Programming Language

April 17, 2019 | Posted in News

In a research paper written by Mark Marron and recently published by Microsoft, Marron introduces a new paradigm that involves “lifting the model for iterative processing away from low-level loop actions, enriching the language with algebraic data transformation operators, and further simplifying the problem of reasoning about program behavior by removing incidental ties to a particular computational substrate and indeterminate behaviors.” This new paradigm is termed regularized programming and is supposed to revolutionize software development in the same way structured programming and abstract data types did in the 1970s.

Marron claims that these improvements will come in the form of better “software quality, programmer productivity, and compilers/tooling.” The Bosque language is then used as an example of a programming language built with a regularized model in mind, giving it the ability to “[eliminate] major sources of errors, [simplify] code understanding and modification, and [convert] many automated reasoning tasks over code into trivial propositions.”


What is Bosque?

Bosque borrows its syntax from TypeScript and its semantics from ML and JavaScript. In short, the language will feel familiar to anyone who’s ever built a front end app. The language includes nominal types, structural types, and combination types.

function nsum(d: Int, ...args: List[Int]): Int {
    return args.sum(default=d);
}

function np(p1: Int, p2: Int): {x: Int, y: Int} {
    return @{x=p1, y=p2};
}

//calls with explicit arguments
var x = nsum(0, 1, 2, 3);
var a = np(1, 2);
var b = np(p2=2, 1);    //same as a
var c = np(p2=2, p1=1); //also same as a

//calls with spread arguments
var t = @[1, 2, 3];
var y = nsum(0, ...t);  //same as x
var r = @{p1=1, p2=2};
var d = np(...r);       //same as a


What differentiates Bosque from JavaScript and its supersets is the fact that it provides “specialized bulk algebraic data operations and integrated support for none (or optional data) processing.”

Bulk Algebraic Data Operations

Marron gives examples of both bulk algebraic data operations and none processing. Here’s how he outlined what bulk algebraic data operations look like when used with Bosque:

 “The bulk algebraic operations in BOSQUE start with support for bulk reads and updates to data values. In addition to eliminating opportunities to forget or confuse a field the BOSQUE operators help focus the code on the overall intent, instead of being hidden in the individual steps, and allow a developer to perform algebraic reasoning on the data structure.”

(@[7, 8])<~(0=5, 3=1);        //@[5, 8, none, 1]
(@[7, 8])<+(@[5]);            //@[7, 8, 5]
(@{f=1, g=2})@{f, h};         //@{f=1, h=none}
(@{f=1, g=2})<~(f=5, h=1);    //@{f=5, g=2, h=1}
(@{f=1, g=2})<+(@{f=5, h=1}); //@{f=5, g=2, h=1}
Baz@identity(1)@{f, h};       //@{f=1, h=true}
Baz@identity(1)@{f, k};       //error
Baz@identity(1)<~(f=5);       //Baz@{f=5, g=1, h=true}
Baz@identity(1)<~(p=5);       //error
Baz@identity(1)<+(@{f=5});    //Baz@{f=5, g=1, h=true}
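For readers more at home in Ruby, the record operators have rough analogues in plain Hash operations. This is a loose analogy of my own, not from Marron's paper: `<+` behaves roughly like `Hash#merge`, and projection `@{f, h}` like slicing with a nil default.

```ruby
# Rough Ruby analogue of Bosque's bulk record operators (illustrative only).
record = { f: 1, g: 2 }

# <+ (extend/update) is roughly Hash#merge: later fields win, new fields are added.
updated = record.merge({ f: 5, h: 1 })
# => { f: 5, g: 2, h: 1 }

# @{f, h} (projection) is roughly slicing with nil standing in for "none".
projected = { f: record[:f], h: record[:h] }
# => { f: 1, h: nil }
```

The point of the analogy is that Bosque makes these bulk operations primitive language constructs rather than library methods, which is what enables the algebraic reasoning Marron describes.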


“None” Processing

In Bosque, none values are equivalent to null or undefined. Rather than following JavaScript’s truthy coalescing, Bosque uses both “elvis operator support for all chainable actions and specific none-coalescing.”

@{}.h            //none
@{}.h.k          //error
@{}.h?.k         //none
@{h={}}.h?.k     //none
@{h={k=3}}.h?.k  //3

function default(x?: Int, y?: Int): Int {
    return (x ?| 0) + (y ?| 0); //default on none
}
default(1, 1)    //2
default(1)       //1
default()        //0

function check(x?: Int, y?: Int): Int? {
    return x ?& y ?& x + y;
}
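Ruby offers a loose analogue of this none propagation with `Hash#dig` and `||` defaults. This comparison is illustrative only, not from the paper:

```ruby
# "None propagation" sketched in Ruby: Hash#dig stops and returns nil
# as soon as a lookup misses, much like Bosque's ?. chaining.
empty  = {}
nested = { h: { k: 3 } }

empty.dig(:h, :k)   # => nil (no error; nil propagates)
nested.dig(:h, :k)  # => 3

# Bosque's ?| (default on none) is roughly Ruby's || with nil:
x = nil
y = 2
sum = (x || 0) + (y || 0)  # => 2
```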

Atomic Constructors

The language also uses atomic constructors to regularize development. This is achieved by using  “direct field initialization to construct entity (object) values.”

concept Bar {
    field f: Int;
    factory default(): {f: Int} {
        return @{f=1};
    }
}

entity Baz provides Bar {
    field g: Int;
    field h: Bool = true;
}



A Side By Side Comparison

It’s when JavaScript and Bosque are compared side by side that you can see what Marron means by stripping away “accidental complexity.” As a regularized programming language, Bosque aims for a declarativeness and conciseness not found in JavaScript and other structured programming languages.


JavaScript vs Bosque



Bosque presents an interesting paradigm shift in a crowded field of programming languages. It will be worth watching how the language matures as an ecosystem of tooling develops around it. Microsoft has made the new programming language available to open source contributors under an MIT license; you can explore Bosque in Microsoft’s GitHub repo.



Emacs 26.2 Releases With Ability to Build Modules Outside Source Tree

April 15, 2019 | Posted in News

Emacs 26.2 was released today and perhaps the biggest improvement to the extensible editor is the ability to build Emacs modules outside of the Emacs source tree. This will allow users to be able to create more “modular” modules rather than having extensions tightly coupled to the source. Other major updates include the ability to compress files with a simple ‘Z’ command in Dired and compliance with version 11.0 of the Unicode Standard.

There are several other minor updates that we’ll include below. You can find a complete list of changes here.


 Installing Emacs now installs the emacs-module.h file

The emacs-module.h file is now installed in the system-wide include directory as part of the installation. This makes it possible to build Emacs modules outside of the Emacs source tree.

New variable ‘xft-ignore-color-fonts’

Default t means don’t try to load color fonts when using Xft, as they often cause crashes. Set it to nil if you really need those fonts.

Mailutils movemail will now be used if found at runtime.
The default value of ‘mail-source-movemail-program’ is now “movemail”. This ensures that the movemail program from GNU Mailutils will be used if found in ‘exec-path’, even if it was not found at build time. To use a different program, customize ‘mail-source-movemail-program’ to the absolute file name of the desired executable.

New vc-hg options.
The new option ‘vc-hg-parse-hg-data-structures’ controls whether vc-hg will try parsing the Mercurial data structures directly instead of running ‘hg’; it defaults to t (set to nil if you want the pre-26.1 behavior).

The new option ‘vc-hg-symbolic-revision-styles’ controls how versions in a Mercurial repository are presented symbolically on the mode line. The new option ‘vc-hg-use-file-version-for-mode-line-version’ controls whether the version shown on the mode line is that of the visited file or of the repository working copy.

Display of Mercurial revisions in the mode line has changed.
Previously, the mode line displayed the local number (1, 2, 3, …) of the revision. Starting with Emacs 26.1, the default has changed, and it now shows the global revision number, in the form of its changeset hash value. To get back the previous behavior, customize the new option ‘vc-hg-symbolic-revision-styles’ to the value ‘(“{rev}”)’.

shadowfile config files have changed their syntax.
Existing files “~/.emacs.d/shadows” and “~/.emacs.d/shadow_todo” must be removed prior using the changed ‘shadow-*’ commands.

 ‘thread-alive-p’ has been renamed to ‘thread-live-p’.
The old name is an alias of the new name. A future Emacs version will declare it obsolete.

‘while-no-input’ does not return due to input from subprocesses.
Input that arrived from subprocesses while some code executed inside the ‘while-no-input’ form injected an internal buffer-switch event that counted as input and would cause ‘while-no-input’ to return, perhaps prematurely. These buffer-switch events are now by default ignored by ‘while-no-input’; if you need to get the old behavior, remove ‘buffer-switch’ from the list of events in ‘while-no-input-ignore-events’.

The new function ‘read-answer’ accepts either long or short answers
depending on the new customizable variable ‘read-answer-short’.

New function ‘assoc-delete-all’.
Like ‘assq-delete-all’, but uses ‘equal’ for comparison.

The function ‘thing-at-point’ behaves as before Emacs 26.1.
The behavior of ‘thing-at-point’ when called with argument ‘list’ has changed in Emacs 26.1, in that it didn’t consider text inside comments and strings as a potential list. This change is now reverted, and ‘thing-at-point’ behaves like it did before Emacs 26.1.




Blueprint For Offline First Mobile Apps

April 15, 2019 | Posted in News, Programming

A typical web-based desktop app is rendered in a browser such as Google Chrome on a desktop or laptop. Every user request or screen results in an associated REST call to the server, and the returned JSON is transformed into a screen update. For example, when a user loads a todo app, the app will issue a GET request to /todo/list, which returns a JSON array. If the user requests a todo’s detail, another request to /todo/{id} is issued.
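As a sketch of that flow, the client-side work amounts to parsing the list and detail responses. The endpoint names follow the article; the response bodies below are canned rather than fetched over the network:

```ruby
require 'json'

# Canned body that a GET /todo/list might return (illustrative).
list_body = '[{"id": 1, "title": "Grocery Shopping", "isComplete": false}]'
todos  = JSON.parse(list_body)
titles = todos.map { |t| t["title"] }  # => ["Grocery Shopping"]

# A detail request, GET /todo/1, would return a single object:
detail_body = '{"id": 1, "title": "Grocery Shopping", "isComplete": false}'
todo = JSON.parse(detail_body)
todo["id"]  # => 1
```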

A typical mobile app works very similarly to a typical web app: each request will in turn require one or more REST calls, and the subsequent REST responses are transformed into native views. Web applications are primarily bound by the limitations the browser sandbox places upon them. Native Android and iOS apps face far fewer restrictions and can support more complex, richer, full-featured applications.


Laptops and desktops accessing web apps are usually connected to reliable Wifi or Ethernet, so the user can assume a fast, low latency, reliable network with minimal compromises. The same cannot be said for mobile users, who may experience high latency and minimal or no coverage. One of the key advantages of a native application over mobile web is the ability to intelligently support a seamless offline experience. Users can suffer a compromised experience due to a variety of factors such as server issues, poor signal, or high latency. Every user has felt the frustration of a mobile app that abruptly ceases to work, or worse, when they enter a building or an underground structure such as the subway.

Another key differentiator between an offline first and an online only app is performance and battery usage. To load a given screen, an online only app will first display a loading indicator, queue a network request, and wait for the server’s response before parsing it and updating the screen. The total time to complete the transaction is the sum of the network request, server processing, and response time. Requests that are dynamically generated on the server may need to query additional services or databases, so between the request and the UI update the user may wait several seconds. An offline first application, however, can query a local data source nearly instantaneously, resulting in an optimal user experience. Optimizing the app’s CPU, radio, and screen usage maximizes battery life; reducing, deferring, and coalescing network requests also improves it.

One of the best examples of an offline first experience is the Gmail application. The app will optimistically synchronize data both upstream and downstream. Regardless of the total number of emails stored in the user’s inbox, network usage will be bound to the size of the change since the last sync. If the connection fails during synchronization, the application will continue where it left off.

Apps that let a user transition seamlessly between online and offline states without compromising the experience are known as offline first applications. As the app ecosystem matures, a native app needs to do more than simply render JSON content to screens to justify its cost. Modern mobile consumers are accustomed to a growing number of high-quality apps that “just work”. Apps that are more reliable and respect the user’s battery and network offer a competitive advantage. Supporting offline first functionality requires cooperation from both client and server and conscious design decisions, which will be discussed further below. Many applications may use a combination of online-only, cache, and offline first approaches where appropriate.

The discussion can be summarized by the following principles:

Offline first Principles

1. The network and/or server is not reliable: Reliable network, low latency and high availability servers are not the norm for a mobile experience. Offline first mobile apps assume the user is offline

2. Fetching network resources is slow: Fetching resources over the network such as a JSON resource will always be slower than fetching from a local source particularly if the resource is dynamically generated

3. Seamless Transition: The app may notify the user about the current network status unobtrusively but should not prevent them from completing their mission. When connection to server is re-established, the app should seamlessly detect the change and continue synchronizing without intervention

4. Queuing: All requests that require network access such as download requests and mutations should be queued and performed in background. Not all queued requests are equal. A request to see the latest data in an active session has higher priority than synchronizing supporting data such as config flags

5. Checking the network state alone is not enough: Online state can only be determined by successfully pinging a controlled server and receiving an expected response. Users may be behind a proxy at a public Wifi gateway, the server may be unavailable, or there may be a connection with high latency

6. Modern apps respect the user’s battery and network: The application should respect the user’s battery and network and only synchronize the data that has changed from the last synchronization, and only when notified of changes from server first. Low priority requests can be delayed and processed in a batch to avoid waking radio
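Principle 4 (queuing) can be sketched as a small prioritized queue. The `OfflineQueue` class and its priority scheme below are hypothetical illustrations, not from the article:

```ruby
# Minimal sketch of a prioritized offline request queue.
# Lower number = higher priority (e.g. 0 = active-session fetch,
# 9 = background sync of supporting data such as config flags).
class OfflineQueue
  def initialize
    @queue = []
  end

  def enqueue(request, priority:)
    @queue << [priority, request]
  end

  # Drain in priority order once connectivity is confirmed.
  def drain
    @queue.sort_by!(&:first)
    @queue.map { |_, request| request }
  end
end

q = OfflineQueue.new
q.enqueue("sync config flags", priority: 9)
q.enqueue("fetch latest todos", priority: 0)
q.drain  # => ["fetch latest todos", "sync config flags"]
```

A production queue would also persist across app restarts and batch low-priority work to avoid waking the radio, per principle 6.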


Caching is a technique that retains the results of an expensive operation so subsequent requests of the same type can be served faster. Although caching is not a replacement for a complete offline first solution, it is often a pragmatic first step before implementing a push based offline system. The caching layer can be implemented using a variety of eviction algorithms such as Least Recently Used (LRU) or First In First Out (FIFO). Many applications may use a hybrid of online-only, cache and offline first approaches to support various use cases.
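A minimal LRU cache along those lines might look like this. This is a sketch assuming a fixed capacity and using Ruby hash insertion order to track recency; the `LRUCache` name is hypothetical:

```ruby
# Minimal LRU cache: evicts the least recently used entry when full.
class LRUCache
  def initialize(capacity)
    @capacity = capacity
    @store = {}  # Ruby hashes preserve insertion order; front = least recent.
  end

  def get(key)
    return nil unless @store.key?(key)
    # Re-insert to mark as most recently used.
    value = @store.delete(key)
    @store[key] = value
  end

  def put(key, value)
    @store.delete(key)
    @store[key] = value
    # Evict the least recently used entry if over capacity.
    @store.delete(@store.keys.first) if @store.size > @capacity
  end

  def keys
    @store.keys
  end
end

cache = LRUCache.new(2)
cache.put(:a, 1)
cache.put(:b, 2)
cache.get(:a)    # touch :a so it is most recently used
cache.put(:c, 3) # evicts :b, the least recently used
cache.keys       # => [:a, :c]
```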

Publish/Subscribe Pattern

To support an offline first application, mobile devices need to efficiently synchronize large amounts of data from the server. The publish/subscribe (pub/sub) pattern is a form of asynchronous service-to-service communication commonly used in serverless and microservice architectures. A pub/sub architecture can be used to build fault tolerant data replication, such as keeping a server and a mobile app database in sync.
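The pattern itself can be sketched in a few lines. This is an in-process illustration; a real deployment would use a message broker plus push notifications, and the `Broker` class here is hypothetical:

```ruby
# Tiny in-process pub/sub sketch: publishers and subscribers are decoupled
# by topic; the broker fans each event out to all handlers for that topic.
class Broker
  def initialize
    @subscribers = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(topic, &handler)
    @subscribers[topic] << handler
  end

  def publish(topic, event)
    @subscribers[topic].each { |handler| handler.call(event) }
  end
end

broker = Broker.new
received = []
broker.subscribe("my-todos") { |event| received << event }
broker.publish("my-todos", { op: :add, id: 1, title: "Grocery Shopping" })
received.size  # => 1
```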

The pub sub pattern has several key advantages over the caching design described above:

– Only the deltas need to be synchronized. If the system contains 1 million records and 100 of them have changed since the last synchronization, synchronizing all data can be an expensive operation. Caching will fetch all data or nothing depending on the invalidation configuration

– Push based. When data is modified on the server, it can send a push notification to all relevant subscribers via GCM or APNS, avoiding pre-emptive network requests and reducing network and battery usage

– Cache invalidation strategies can be complex and involve various tradeoffs that can result in excess network/CPU or not receiving timely updates

Publish Stream

The data on the server can be represented as a time series of transaction mutations. The primary data store always contains the source of truth. Only the master data source can be mutated; all other nodes receive read-only subsets from the primary node. Mutations are one of create, update, or delete operations and can be modeled as a stream of immutable events or messages. This stream of mutation events is known as the publish stream. Any point in time can be represented by replaying the events in order from the stream.

Example Publish Stream for a Todo app:

1. Add { id: 1, userId: 1, title: “Grocery Shopping”, isComplete: false }

2. Update { id: 1, userId: 1, title: “Grocery Shopping”, isComplete: true }

3. Add { id: 2, userId: 2, title: “Do Taxes”, isComplete: false }

4. Delete { id: 1 }

5. Add { id: 3, userId: 1, title: “Write Medium Article”, isComplete: false }

If the entire stream is replayed the final state will be:

1. { id: 2, userId: 2, title: “Do Taxes”, isComplete: false }

2. { id: 3, userId: 1, title: “Write Medium Article”, isComplete: false }

One important item to note is that the final dataset contains records from both user 1 and user 2. In a typical Todo app users will only want to subscribe to their data. The above data could thus be written to multiple different streams such as “My Todos”, or “My Incomplete Todos”. Designing and setting up publish streams will be covered in more detail in a future article.
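Replaying the example publish stream can be sketched directly. The mutation data mirrors the article's example; the replay loop itself is an illustrative sketch:

```ruby
# The example publish stream from above, as (operation, record) pairs.
stream = [
  [:add,    { id: 1, userId: 1, title: "Grocery Shopping",     isComplete: false }],
  [:update, { id: 1, userId: 1, title: "Grocery Shopping",     isComplete: true  }],
  [:add,    { id: 2, userId: 2, title: "Do Taxes",             isComplete: false }],
  [:delete, { id: 1 }],
  [:add,    { id: 3, userId: 1, title: "Write Medium Article", isComplete: false }],
]

# Replaying the events in order reconstructs the state at any point in time.
state = {}
stream.each do |op, record|
  case op
  when :add, :update then state[record[:id]] = record
  when :delete       then state.delete(record[:id])
  end
end

state.keys  # => [2, 3], the two surviving records
```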

Subscribe Stream

The primary node (server) is responsible for publishing one or more streams of all mutations. Data is replicated to the replica nodes (mobile devices) by subscribing to the appropriate streams. The stream can be replayed and interrupted at any point and will always represent the system at a point in time. When a mobile device mutates data, the server performs the mutation on the primary node and writes the transactions to the appropriate streams. The client can subscribe to an event stream, receive the events asynchronously, and persist them to a local data store such as SQLite as they arrive. Another key attribute is that the stream can be interrupted and scheduled to resume at any point based on network, battery, or when the application loses focus or the device goes to sleep.

In the following example, the master node synchronizes records from Node 1 (Master) to Node 2 (Online Slave) and Node 3 (Offline Slave). As mutations are made in Node 1 they are immediately reflected in Node 2. The changes to Node 3 are queued and synchronized when a connection can be established.

Offline Mutations and Conflict Resolutions

When a user changes data on the mobile device, such as an add or update, the transaction must be queued locally and sent to the server in the background when appropriate. The local data cache can be mutated immediately to ensure the user sees the latest data. If the data is shared and can be modified by multiple users, such as in a group, the developer will need to determine how to resolve conflicts.

1. Last write wins: The last write to the system will overwrite any previous writes. If multiple users are writing to the same data they may be processed out of order. This strategy is useful if the data can be modified multiple times such as a text field

2. First write wins: The first write to the system will mutate the data. Subsequent writes can either ignore the transaction or return an error. This strategy is useful when making edits that can only occur once such as a status change

3. Merge: Subsequent writes to the data will intelligently modify the data so both requests are applied
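The first two strategies can be sketched by ordering writes. This is an illustration only; real systems would use server timestamps or version vectors rather than a local array:

```ruby
# Two conflicting writes to the same field, ordered by arrival time.
writes = [
  { field: :title, value: "Groceries",        at: 1 },
  { field: :title, value: "Grocery Shopping", at: 2 },
]

last_write_wins  = writes.max_by { |w| w[:at] }[:value]  # => "Grocery Shopping"
first_write_wins = writes.min_by { |w| w[:at] }[:value]  # => "Groceries"

# Merge: when writes touch different fields, both can be applied.
a = { title: "Groceries" }
b = { isComplete: true }
merged = a.merge(b)  # => { title: "Groceries", isComplete: true }
```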


Many applications may use a combination of online-only, cache, and push based synchronization to achieve the optimal user experience. Push based synchronization’s primary use case occurs when the user can subscribe to a defined data set such as “My Todos”.

In this article, we investigated the various types of apps: online only REST based, caching, and pubsub based mobile applications. We set out the design and groundwork to build an offline push-based app in a technology agnostic way. The next series of articles will go through building a simple Todo application server and client and making specific technology choices. Source



Coders name Elon Musk the Most influential person in tech for 2019

April 12, 2019 | Posted in News, Programming, Technology

Elon Musk will have the most influence in the world of technology this year, according to an exhaustive survey of programmers.

In the Stack Overflow Developer Survey, 30.2% of respondents said they thought the SpaceX and Tesla CEO would have the most influence in the field in 2019.

In second was Amazon CEO Jeff Bezos, with just 7.2%. Microsoft CEO Satya Nadella came in third with 4.4% of the vote.

The survey saw almost 90,000 programmers respond from around the world, making it the largest of its kind.

Elon Musk’s influence remains despite a rocky few months

The survey indicates that the near-cultish status surrounding Musk has not waned, despite a number of incidents that risked denting his public image.

At the end of 2018 Musk was accused of fraud by The US Securities and Exchange Commission (SEC) over what the organisation deemed “false and misleading tweets” regarding taking Tesla private.

In the fallout from the incident, Musk was forced to step down as chair of Tesla, although he maintained his position as CEO.

In February Musk fell afoul of the SEC again on Twitter, when he reported Tesla production figures over the social media platform in apparent contravention of his previous settlement.

Meanwhile, Tesla has faced a somewhat fraught time, with the company having to shed almost a tenth of its workforce in 2018 over a botched automation attempt.

However, Musk has also had numerous public image wins that have maintained his position as a well-liked figure within the world of technology.

SpaceX, in particular, has had a successful time, with the Crew Dragon completing its first unmanned test mission ahead of transporting humans to the International Space Station.

Of course, whether Elon Musk can retain his position of widespread adoration in the tech world remains to be seen, but with little competition from other big personalities he is unlikely to be unseated anytime soon.


Survey: Younger Coders Most Likely To Appreciate Blockchain

April 12, 2019 | Posted in News, Programming

A survey of tens of thousands of coders and programmers shows that one in five currently uses blockchain technology, but that could be set to rise, as nearly 30 percent said it is “useful across many domains and could change many aspects of our lives.”

The survey was conducted by question-and-answer site Stack Overflow and involved nearly 90,000 respondents in total. Most were professional coders or students preparing for that kind of career.

More than 1 percent of respondents—which means more than 600 of them—said they were implementing their own cryptocurrency.

Unsurprisingly, young coders are more likely to think blockchain is going to be important in the future, while their older colleagues remain skeptical. A notable minority of respondents said blockchain was a passing fad (17 percent) or an irresponsible use of resources (16 percent). Opinion was divided: just over a quarter acknowledged blockchain is “useful for immutable record keeping outside of currency.”

Among its other results, the survey confirmed an enormous gender disparity among developers: in the US, more than 88 percent of respondents were male. That was the lowest proportion found in any country. The most male-dominated roles were managers, administrators and executives. The roles most inclusive of women and other gender identities included researchers, analysts and scientists.


Matrix Confirms That Hacker Stole Encrypted Passwords

April 12, 2019 | Posted in News

Matrix, the open network for secure, decentralized communication, was hacked yesterday. The attackers gained access to the servers hosting Matrix and were then able to escalate their privileges into the production infrastructure by exploiting a vulnerability in the Jenkins CI server and hijacking credentials (forwarded SSH keys).

What got compromised? Matrix believes that unencrypted message data, password hashes and access tokens may have been affected. Left uncompromised were:

  • Source code and packages
  • Servers
  • Identity server data


The damage may have been a lot worse had it not been for @jaikeysarraf who tipped Matrix off to the Jenkins vulnerability. It was then that Matrix’s investigators realized the full scale of the attack. They were then able to isolate the problem, remove Jenkins, and save the other machines.

This isn’t the first time that Jenkins has posed a major security threat to servers due to credential hijacking. In 2018, ZDNet reported that thousands of servers were vulnerable because two vulnerabilities allowed hackers to gain admin rights using invalid credentials on victims’ servers. The vulnerabilities were patched, however.

In this case, to be fair to Jenkins, the vulnerabilities exist in plugins used by Jenkins. Here are the three affected plugins according to NIST.

  • A sandbox bypass vulnerability exists in Script Security Plugin 2.49 and earlier … that allows attackers with the ability to provide sandboxed scripts to execute arbitrary code on the Jenkins master JVM.
  • A sandbox bypass vulnerability exists in Pipeline: Declarative Plugin 1.3.3 and earlier … that allows attackers with Overall/Read permission to provide a pipeline script to an HTTP endpoint that can result in arbitrary code execution on the Jenkins master JVM.
  • A sandbox bypass vulnerability exists in Pipeline: Groovy Plugin 2.61 and earlier … that allows attackers with Overall/Read permission to provide a pipeline script to an HTTP endpoint that can result in arbitrary code execution on the Jenkins master JVM.


Yesterday, Matrix was cautious about making any definitive statements as to whether or not sensitive data had been stolen or downloaded; but today, they’ve been able to provide an update that details exactly how the attacker compromised their machines.

At around 5am UTC on Apr 12, the attacker used a cloudflare API key to repoint DNS to a defacement website. The API key was known to be compromised in the original attack, and during the rebuild the key was theoretically replaced. However, unfortunately only personal keys were rotated, enabling the defacement. We are currently doublechecking that all compromised secrets have been rotated.

Later on, Matrix confirmed that encrypted password hashes were stolen.

The defacement confirms that encrypted password hashes were exfiltrated from the production database, so it is even more important for everyone to change their password. We will shortly be messaging and emailing all users to announce the breach and advise them to change their passwords. We will also look at ways of non-destructively forcing a password reset at next login.


In the aftermath, Matrix promises to beef up the security of its production infrastructure. In the case of tools like Jenkins, that calls for more frequent vulnerability checks (all of the vulnerabilities in NIST’s database were last modified on January 22, 2019).



Google Responds To Critics By Open Sourcing Google Cloud Platform

April 11, 2019 | Posted in News

When Google fully released Anthos and priced it at $10,000/month per 100 vCPU block, developers weren’t exactly brimming with joy. Many questioned the direction of Google Cloud CEO Thomas Kurian in light of his stated desire to use Oracle and Amazon’s playbook; AWS has notoriously abused open source principles. In recent times, however, Google has actively taken part in the development of Go, Kubernetes, TensorFlow, Firebase and many more projects. So, perhaps to reinforce Google’s image as a supporter of open source, Google Cloud has announced that it will extend its cloud support to even more open source projects.

Here’s a statement put out by their PR team:

We’ve always seen our friends in the open-source community as equal collaborators, and not simply a resource to be mined. With that in mind, we’ll be offering managed services operated by these partners that are tightly integrated into Google Cloud Platform (GCP), providing a seamless user experience across management, billing and support. This makes it easier for our enterprise customers to build on open-source technologies, and it delivers on our commitment to continually support and grow these open-source communities.

The following projects will receive Google Cloud's support:

  • Confluent
  • DataStax
  • Elastic
  • InfluxData
  • MongoDB
  • Neo4j
  • Redis Labs

By supporting these database projects, Google Cloud hopes to benefit the vast number of apps that depend on the open source technologies listed above. According to Google, the benefits include:

  • Fully managed services running in the cloud, with best efforts made to optimize performance and latency between the service and application.
  • A single user interface to manage apps, which includes the ability to provision and manage the service from the Google Cloud Console.
  • Unified billing, so you get one invoice from Google Cloud that includes the partner’s service.
  • Google Cloud support for the majority of these partners, so you can manage and log support tickets in a single window and not have to deal with different providers.


Only time will tell whether Google keeps its promise to build an open partnership with open source communities. What we can take away, however, is that even notoriously proprietary companies like Microsoft have seen the need to embrace open source. Skepticism aside, more support can't hurt.



In 3 years these high-paying tech jobs pay six-figure salaries

April 11, 2019 Posted by News, Recruiting, Technology 0 thoughts on “In 3 years these high-paying tech jobs pay six-figure salaries”

Earning a six-figure salary might not be as far out of reach as you think.

Entry-level positions for data scientists, product managers, and developers could pay $100,000 or more, according to a new study from Comparably, a website that rates workplace culture and compensation based on self-reported data.

In its latest study, Comparably analyzed the salaries of employees in the technology industry with three years of experience or less, which it evaluated from more than 8,000 employee records. The highest salary on the list was for the position of data scientist, which Comparably’s report indicates has an average entry-level salary of $113,254.

Comparably’s results share some similarities with data published by job search platform Monster, which listed web and software developer positions in its list of the highest-paying entry-level tech jobs.

The findings also underscore the technology sector's increased emphasis on technical skills over traditional credentials. Apple CEO Tim Cook even said recently that about half of the company's US hires last year were people without a four-year degree.

See below for the top 10 highest-paying entry-level jobs in tech, according to Comparably.

10. QA Analyst

Average Salary: $70,383

A QA analyst looks for issues in websites and software and is responsible for making sure those problems are corrected, according to Comparably’s description. Duties and responsibilities could include conducting software audits and making recommendations for repairing defects, as a sample listing from ZipRecruiter notes.

9. Marketing Manager

Average Salary: $70,392

A marketing manager typically serves as the liaison between the IT department and the marketing division, says Comparably. As is the case with many of the positions on this list, day-to-day duties and responsibilities will differ depending on the employer.

8. Sales Representative

Average Salary: $70,622

At a technology company, a sales representative’s goal would be to cultivate sales with potential clients, Comparably says in its description. This could entail giving presentations about the company’s tech products and services.

7. UI/UX Designer

Average Salary: $84,841

Employees in this role are responsible for a website’s user experience, including making sure that it adheres to the company’s vision, as Comparably notes. Responsibilities for a position like this could include designing elements like menus and widgets and illustrating design ideas through storyboards, according to a sample job description from Workable.

6. DevOps Engineer

Average Salary: $89,300

A DevOps engineer typically manages software development and automates systems, says Comparably. Testing implemented designs, handling code deployments, and building and testing automation tools are all duties that could fall under a DevOps engineer’s responsibilities, according to ZipRecruiter.

5. Sales Engineer

Average Salary: $90,575

The role of the sales engineer is to sell tech services and products by pairing sales skills with technical knowledge, according to Comparably. In this role, you may be expected to establish a rapport with customers and potential customers to identify service requirements, and to prepare cost estimates by working closely with engineers and other technical personnel, according to a sample job listing from Monster.

4. Mobile Developer

Average Salary: $98,317

A mobile developer, as the title implies, works on applications for mobile devices. In this role, you may be required to design interfaces, troubleshoot and debug the product, and support the entire app life cycle from concept through post-launch support, according to a sample job listing from Workable.

3. Developer

Average Salary: $100,610

A developer designs and tests software, as Comparably notes. Responsibilities will vary depending on the type of developer job and the company. But a sample description from Indeed indicates a software developer role would entail writing, editing, maintaining, and testing computer software.

2. Product Manager

Average Salary: $106,127

This type of role usually involves planning different stages of a product’s development and rollout and then maintaining that product post-launch, according to Comparably. This could involve conducting market research, determining specifications and production timelines, and developing marketing strategies, says Monster.

1. Data Scientist

Average Salary: $113,254

A data scientist gathers insights by using tools to mine through large amounts of data, according to Comparably’s description. Employees in this role typically use these insights to deliver data-driven solutions to business problems, according to a Glassdoor sample job listing.
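Since the list above is Comparably's salary table presented lowest to highest, the ranking can be sanity-checked with a few lines of code:

```python
# Comparably's reported entry-level averages, as listed in the article.
salaries = {
    "QA Analyst": 70_383,
    "Marketing Manager": 70_392,
    "Sales Representative": 70_622,
    "UI/UX Designer": 84_841,
    "DevOps Engineer": 89_300,
    "Sales Engineer": 90_575,
    "Mobile Developer": 98_317,
    "Developer": 100_610,
    "Product Manager": 106_127,
    "Data Scientist": 113_254,
}

# Rank highest-paid first and flag the six-figure roles.
ranked = sorted(salaries.items(), key=lambda kv: kv[1], reverse=True)
six_figure = [title for title, pay in ranked if pay >= 100_000]

print(ranked[0])    # ('Data Scientist', 113254)
print(six_figure)   # ['Data Scientist', 'Product Manager', 'Developer']
```

Three of the ten roles clear $100,000 on average, which matches the article's headline claim about entry-level six-figure salaries.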


Notorious Hacking Forum And Black Market Darkode Is Back Online

April 11, 2019 Posted by News, Programming 0 thoughts on “Notorious Hacking Forum And Black Market Darkode Is Back Online”

In 2015, over 70 people were arrested in a high-profile takedown of Darkode (aka Dark0de), one of the world's most notorious hacking forums and black markets. In operation since 2007, the online marketplace offered hacking tools, zero-day exploits, stolen data, and spamming and botnet services. It was used by groups including the infamous Lizard Squad, the teen hackers known for hitting services such as Xbox Live and the PlayStation Network with distributed denial of service (DDoS) assaults.

Since the FBI and Europol operation to take down the Darkode site, there appear to have been several attempts to revive it. Notably, in late 2016, alleged former members of the site tried to bring it back in a diluted form, according to Motherboard.

Now, just over two years later, the site is back again – and this time under new ownership. “Today Dark0de consists of tools, exploits, 0days, accounts that have been cracked, configs for tools, and email/password combinations all available to the public,” a hacker called Ownz told me in an online chat.

The revamped Darkode forum’s Twitter account description reads: “A cybercrime forum and public market offering that serves as a venue for the sale & trade of hacking services, botnets, malware, and illicit goods and services.”

“The long-term goal is to revive Dark0de and hopefully it will become one of the best hacking forums once again,” Ownz says. “If you are wondering why we waited three years to open this service back up: on a federal level we believe Europol and the FBI were still investigating and looking for more hackers to arrest.”
