
Lens Team

Millennials or Test Dummies

By: Kyle Guin // Marketing Lead 

My name is Kyle, I am 22 years old and attend the University of New Mexico. Much like my friends, my social world is dominated by online platforms. Every day I’m surrounded by Instagram, Snapchat, Facebook, and Google, just to name a few. However, I have noticed that FOMO (fear of missing out) is a very real thing. We all seem to struggle with the true purpose of these applications. Is this tech meant to make us feel more connected with others, or does it leave us feeling more disconnected from ourselves? It seems to me that we’re in an experimental phase… some of us are highly addicted to this virtual world, while others are hardly connected at all. A select few of us have found the right balance to positively influence our lives.

You see, society has talked down to us millennials for our habits with social media and our relationships with our devices… This is funny to me, because our generation is the test dummy for this new world. A world where we have the opportunity to live two lives, one in the digital world and one in the real world. Our digital lives can be whatever we want them to be. On Facebook, I’m the golden child that every parent wants, on Instagram I spend casual days in Switzerland, and on LinkedIn I’m a white-collar professional who is driven, ambitious, and highly qualified.

I think this split personality is a bad thing. We create versions of ourselves based on what we wish we were and what people want us to be, and that creates all kinds of problems. The biggest problem I see is with our mental health: our online personas create unrealistic expectations for our real lives. We constantly compare ourselves to social media stars who spend their days on faraway beaches, and to our peers, their follower counts, and how many likes they get. I see peers whose online lives look perfect, yet oftentimes these same “perfect people” secretly struggle with mental health issues. We never properly deal with those issues; instead we post a picture from a happier time in our lives and, for a brief moment, everything is okay, as likes and comments from our friends give us a temporary dopamine high.

Photo: Marc Schäfer

The platform that person used to post their photo loves this interaction. It’s exactly how it was designed: socially engineered to be addictive and to consume your time. We turn to these platforms every time we need a “hit”. The worst part about this deal is that we think we are getting the drugs for free. These platforms compromise our security and our personal lives. They see EVERYTHING: they know where we accessed the platform, how long we stayed, what we looked at, how long we looked, who we looked at, etc… Then they manipulate our newsfeeds with ads and related content; it’s a never-ending cycle.

I’m not saying that what we are doing here at Lens is going to solve all of these problems, but it’s an opportunity to shift your paradigm. Lens is a place where we want you to be truthful about who you are, and where you can interact with your peers and favorite companies… all under your control. We here at Lens want technology to make your life better, not worse… But the most important part: we want you to do it on your own terms! We will not stand between you and your interactions. That’s your information; you should have control of it. Hopefully this control will give you a healthy state of mind.

Edited by Gavin Leach.

Build from a New Starting Point

By Mark Chavez // Co-Founder & CEO

 

Growing up on a ranch gave me many opportunities to build new things, whether to solve a problem or to plan for growth.  My dad was someone who could design and build anything he put his mind to. He taught me how to get my dreams down on paper, figure out the requirements, find the resources, and put it all into action.  Before long, what was a dream became a reality.


There is much talk about disruption and transformation in the world of technology, but what really interests me is building from a new starting point.  The evolution of technology makes new beginnings not only possible but inevitable.


The current platform-centric architecture of the cloud requires that I first connect to a platform in order to connect with you.  This is where all the exploitation and harvesting of our time and data takes place. I leave a trail of data at every point of interaction.  My data is scattered, harvested, and monetized by others in a market that I am not even part of. In most cases it is then sold right back to me in the form of advertisements.

 

What if there were a solution where I owned all of my data, completely under my control, with the capability to decide who could have access to what information, under my terms and conditions?  Most importantly, instead of transferring my data to others, I could give subscribed access to it. This would be similar to Netflix, where I can stream and watch content but not download it. I could stop leaving a trail of data and move from giving away my data to owning it.


This is the new beginning that we are building, and it all starts with what is referred to as a single source of truth.  In the world of information technology, establishing the single source of truth is a primary goal. Every business should have one truth, the master data, for such things as its employees, customers, and physical assets.


I believe that we, as individuals, should also have one source of truth for our data: a personal storage container where all of our data, especially our master data such as name, address, resume, photos, videos, healthcare, social, and financial information, resides under our own individual ownership and control. If someone requires access to our data, we don’t fill in their forms or send them the information; instead, we give them a subscription and policy to access only what they need.


This means that when I change my address, for example, I only need to change it in one place.  Every document, application, service, and platform that has subscribed access to my truth would then observe the change.  Just as important, I would clearly know who has access to what information and could revoke or change that access at any time.


Establishing the single source of truth for the individual is our starting point.  Our product and our business model are individual data autonomy. This is a clear departure from the cloud/mobile architecture and business model, which always tries to take your data and sell it in a market that you are not even in.


This solution is possible because of the new tool we are developing, called a Lens.  A Lens is a subscription and policy to access specific data that you own. You can give a Lens, revoke a Lens, or change the access on a Lens at any time.
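
Purely as an illustration of that idea, here is a small sketch in JavaScript of what giving, revoking, and reading through a Lens could look like. Every name and field below is hypothetical; this is not the actual product API, just one way to picture a subscription-and-policy model over data you keep yourself.

// A hypothetical illustration of the Lens idea: a policy naming who may
// read which fields of your single source of truth, revocable at any time.
const myTruth = {
  name: 'Mark Chavez',
  address: '123 Ranch Road',
  resume: 'on file'
}

// Giving a Lens records the subscriber and the fields they may access.
function giveLens (subscriber, fields) {
  return { subscriber: subscriber, fields: fields, revoked: false }
}

// Revoking a Lens cuts off access without touching the data itself.
function revokeLens (lens) {
  lens.revoked = true
}

// Subscribers always read through the Lens; they never receive a copy of the record.
function readThroughLens (lens, truth) {
  if (lens.revoked) throw new Error('access has been revoked')
  const view = {}
  lens.fields.forEach(function (field) { view[field] = truth[field] })
  return view
}

// A delivery service only ever sees the address, and a change to the address in
// myTruth is immediately visible through every Lens that includes it.
const deliveryLens = giveLens('delivery-service', ['address'])
console.log(readThroughLens(deliveryLens, myTruth)) // { address: '123 Ranch Road' }
revokeLens(deliveryLens)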


The best part is that we can remove the need for the platform, the man in the middle that does all the harvesting.  You will no longer need to use Facebook, Twitter, or LinkedIn to share that information with others. The single source of truth, ownership of your data, and the Lens will give you the freedom to interact privately and securely with others.


Furthermore, if an enterprise does require your data, it will have to purchase your Lens, and you decide what it has access to and under what terms and conditions.  Enterprises in the future will no longer own our data; they will have to subscribe to it. They never should have had it in the first place!

Data is the new oil, the new currency, and today we have no choice but to give it away to others. We are building from a new starting point where the individual is at the center, in full ownership and in control. This has been a dream of mine for quite a while, and it is now becoming a reality!

Since disconnecting from all platforms such as Facebook, Twitter, and LinkedIn, my only online presence is through my own public Lens, found here: Mylens.io/markchavez.

Making IoT Simple and Fun

By Cody W. Eilar // CTO & Cofounder

With so many cloud providers jumping on the bandwagon to provide their customers IoT services, you have to wonder: what’s in it for them? Well, as it turns out, quite a bit of money (who would have thought). IoT is not a new topic, and in fact it’s quite a boring one for the most part. The only thing that’s different about an IoT device compared to the computer you are using right now is that the IoT device probably has more constrained resources, at least I hope that’s the case. So what are these cloud providers selling you? They are selling you security, ease of operation of your IoT infrastructure, and reliability you don’t have to manage. Sounds pretty sweet, right? Well, that is until you learn how to run your own IoT infrastructure with no more than a few machines and some great libraries developed by the JavaScript community. In this post, I’ll show you how you can build your own IoT infrastructure with just a few dozen lines of code.

So why is IoT boring? Well, because it’s nothing new. We’ve had microprocessors and microcontrollers for decades, and suddenly everyone thinks that the internet of things is “new”. It’s not. What is new is that so many IoT companies forgot the basics of network security when they implemented their IoT infrastructure. Many companies don’t even do basic encryption when communicating with their devices, which is simply unacceptable. With all the IoT hype going around and huge market opportunities emerging, companies worked as quickly as they possibly could to be first to market. The result was lots of devices with subpar security, opening the market for companies like Amazon and Microsoft to help IoT companies manage their security infrastructure.

Ok, now it’s time for me to step off the soap box and start getting into the project! Here is a quick sketch of what our IoT architecture is going to look like:

Pretty dang simple! Both the device and the client talk to a broker that lives in some infrastructure somewhere on the internet (or on our home local network!). The broker uses Redis as a backend for storing messages. That way, in order to scale, we can just add more machines to both our broker and our Redis deployment. To implement this, we are going to use two libraries: Mosca and MQTT.js. Mosca is great because you can either use it as a module or deploy it standalone if you just need basic brokering. However, if you wanted to apply more logic, like whitelisting IP addresses or adding an authorization layer for communication, you could easily do that here.


I want to add a brief note about MQTT. Why aren’t we using another protocol like HTTP? Well, MQTT is designed for machine-to-machine interaction. The protocol is very lightweight and much easier to manage on devices with limited resources, like a Raspberry Pi or an Arduino.


Now let’s look at some code. The first bit of code we need is our broker. I’ve chosen to use the module for the broker instead of the standalone CLI version so that I can have some additional flexibility.

 

var mosca = require('mosca')

var ascoltatore = {
 type: 'redis',
 redis: require('redis'),
 db: 12,
 port: 6379,
 return_buffers: true, // to handle binary payloads
 host: "localhost"
};

var moscaSettings = {
 port: 1883,
 backend: ascoltatore,
 persistence: {
   factory: mosca.persistence.Redis
 }
};

var server = new mosca.Server(moscaSettings);
server.on('ready', setup);

server.on('clientConnected', function(client) {
 console.log('client connected', client.id);
});

// fired when a message is received
server.on('published', function(packet, client) {
 console.log('Published', packet.topic, packet.payload);
});

// fired when the mqtt server is ready
function setup() {
 console.log('Mosca server is up and running')
}

 

This code was essentially copied directly from the Mosca documentation. For this demo, I am running everything on a single machine, so if you want to scale this out to different infrastructure, all you would have to do is change where your Redis is running. If you are using Docker, you can simply run this command:  docker run --name some-redis -p 6379:6379 -d redis:latest and you will have a Redis server up and running in no time! The code is pretty self-explanatory, but in case you missed it, all we are doing is creating a Mosca server with a Redis backend. We then set up some basic events. Clearly our events aren’t doing anything meaningful, they simply print out some information, but that data is helpful in case you find any bugs in your code down the road. There are other events you can hook into too, but this tutorial doesn’t cover those. Finally, to run this snippet after you have stood up your Redis server, run “node broker.js”. If everything worked, you should see the message “Mosca server is up and running”. Congratulations, you deployed your first MQTT broker.
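
As a quick sanity check (this little script is my addition, not part of the original lab), you can point a throwaway MQTT.js client at the broker and make sure a message makes the round trip:

// smoke-test.js: a quick check that the broker accepts connections and routes messages
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://localhost:1883')

client.on('connect', function () {
  // subscribe first, then publish to the same topic so we hear our own message
  client.subscribe('test/topic', function () {
    client.publish('test/topic', 'hello broker')
  })
})

client.on('message', function (topic, message) {
  console.log(`round trip on ${topic}: ${message.toString()}`)
  client.end()  // close the connection once the message comes back
})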

Now it’s time to start looking at how we can build our client and our device code. Basically, we want to send our device commands from the client code and know that the command was received. We could also have our device send us status updates occasionally if we are monitoring something like water temperature or moisture. But for now, let’s assume that our device only responds to what our client sends it. So let’s start by designing our device.


Using the MQTT.js library, we only need to handle a few events: when we connect, when we’ve subscribed to a channel, and when we’ve received a message on that channel. We first create our client:


var mqtt = require('mqtt')
var client  = mqtt.connect('mqtt://localhost:1883')


This tells the client to connect to our Mosca broker running on localhost, which we started earlier. In the next step, we need to have our device listen on a particular channel:


var deviceChannel = 'someChannel'


client.on('connect', function () {
 client.subscribe(deviceChannel, function (err) {
   if (!err) console.log(`successfully subscribed to: ${deviceChannel}`)
 })
})


This bit of code simply tells the client that we are listening for any incoming messages on the “deviceChannel” channel. This isn’t too interesting yet, but when we start sending messages to this channel, we need to be able to handle them. This is where the following code comes into play:





client.on('message', function (topic, message) {
 // message arrives as a Buffer; parse it into a JSON payload
 const payload = JSON.parse(message.toString())
 console.log(`Received ${JSON.stringify(payload)} from a client`)
 const messageToSend = `Successfully received your message ${JSON.stringify(payload)}!`
 // the client tells us which channel to reply on in payload.reply
 client.publish(payload.reply, JSON.stringify(messageToSend))
})


So there are a couple of things going on here. As soon as we receive a message, we are able to filter on the topic. That way, if our device is subscribed to multiple channels, we can handle calls to those channels differently. For example, you might create channels called ‘deviceChannel/waterTemp’ or ‘deviceChannel/humidity’, as in the sketch below. Also notice that in order to send a message back, we need to know which channel the client is listening on. Ideally, the client would be subscribed to whatever channel is in “payload.reply”, so it could handle the response from the device.
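
Purely as an illustration of that multi-channel idea, here is a small self-contained sketch using an MQTT topic wildcard; the subtopic names are hypothetical and not something the device above actually publishes:

// A sketch of handling several subtopics on one device (topic names are hypothetical).
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://localhost:1883')

client.on('connect', function () {
  // '+' matches exactly one topic level, so this subscription covers
  // deviceChannel/waterTemp, deviceChannel/humidity, and so on.
  client.subscribe('deviceChannel/+')
})

client.on('message', function (topic, message) {
  // branch on the full topic name to treat each sensor differently
  switch (topic) {
    case 'deviceChannel/waterTemp':
      console.log('water temperature reading:', message.toString())
      break
    case 'deviceChannel/humidity':
      console.log('humidity reading:', message.toString())
      break
    default:
      console.log('unhandled topic:', topic)
  }
})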


All of this has been great, but we are missing three very important ideas. Can you guess what they are? That’s right: authenticity, confidentiality, and integrity. Right now, we have none of them! Anyone could be listening to our network traffic and see our communication, and they could also send messages to our device. If we deployed this to the web right now, we’d be in a lot of trouble. So how can we protect ourselves from attackers trying to control our device or deceive us? TweetNaCl.js to the rescue! TweetNaCl is a minimal set of cryptographic utilities that can help us encrypt our data to ensure confidentiality, and we can also use it to ensure integrity and authenticity. Why this library instead of libsodium? Well, this library is well documented, widely used in the JS community (600+ stars as of this writing), has been audited, and has only the features I need (libsodium has many more features!). All of this was important to me in making this decision.

From this library we are going to use two functions. The first is “nacl.box.keyPair()”, which creates a public/private key pair. The second is “nacl.box()”. The box analogy is great if you have never heard it: you hand out an open box with an open lock, to which only you have the key, to whomever wants it. They can put their message in the box, close it, and lock it with your lock. Now no one can open it, not even the sender of the message, except for you. So with box, we’ve given ourselves confidentiality (no one can open our box except the one who has the key), and we’ve also given ourselves integrity, because any alteration of the message will cause “nacl.box.open” to fail. But what about authenticity? Well, we are going to cheat on that one a bit by pre-programming the public key of the device on the client and the public key of the client on the device. In that regard, only the device can communicate with the client, and only the client can communicate with the device. If this seems a bit confusing, let’s reimplement our device code with TweetNaCl:


var mqtt = require('mqtt')
var encrypt = require('./identity.js').encrypt
var decrypt = require('./identity.js').decrypt
var client  = mqtt.connect('mqtt://34.214.175.113:1883')
var R = require('ramda')

const deviceIdentity = {
 secretKey:"694e49655547484b6131534a744b677035487a6b625771654a74474e746e35624a4665376f6e44726f2f413d",
 publicKey:"653172766e53504738505a51625a7646453337355345394d4367644b54646e486a6733313845444c7858413d"
}

// White list of allowed clients
const clientPublicKey = "526d4b73516b614357533843344f7a342b51785a6847365867666c4e494830514e4c5334377430493856453d"


client.on('connect', function () {
 client.subscribe(deviceIdentity.publicKey, function (err) {
   console.log(`err = ${err}`)
   if (!err) console.log(`successfully subscribed to: ${deviceIdentity.publicKey}`)
 })
})
client.on('message', function (topic, message) {
 const payload = R.compose(JSON.parse, R.map(x => x.toString()))(message)
 console.log(`Received ${JSON.stringify(payload)} from a client`)
 if (payload.publicKey !== clientPublicKey){
   console.log('Unauthorized client attempting to access device')
   return
 }
 // if message is not properly signed, this will throw. i.e. even if an attacker
 // knows that the public key is allowed, they will fail at this step unless
 // they have somehow managed to also get the secret key from the client
 const unencrypted = decrypt(payload, deviceIdentity.secretKey)
 console.log(`Received a message: ${unencrypted} from a client`)
 const messageToSend = `Successfully received your message ${unencrypted}!`
 const reply = encrypt(messageToSend, deviceIdentity, payload.publicKey)

 client.publish(payload.publicKey, JSON.stringify(reply))
})




I haven’t been completely honest with this particular refactor. I actually wrap TweetNaCl to hide some of the complexity from you. Basically, I now use the public keys as the channels for communicating with the device and the client. You can see that where the device subscribes, it subscribes using “deviceIdentity.publicKey”. The second thing to note is that I’ve stored the “clientPublicKey” in the device code. As I mentioned earlier, this is to help with the authenticity of messages from the client. If a message doesn’t come from a public key that we trust, we either reject it or fail when attempting to decrypt the contents of the box.
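
The identity.js wrapper itself never appears in this post, so here is a rough sketch of what such a wrapper might look like, assuming the payload shape { publicKey, nonce, box } used above and keys stored as hex-encoded base64 strings. Treat the helper names and encoding details as assumptions rather than the exact code behind the demo:

// identity.js: a minimal sketch of the wrapper described above; the original file
// is not shown in the post, so the payload shape and encodings here are assumptions.
var nacl = require('tweetnacl')
var naclUtil = require('tweetnacl-util')

// The keys in this post appear to be hex-encoded base64 strings (hex is used
// because MQTT subscriptions can't be made with base64 strings directly).
function keyToBytes (hexKey) {
  return naclUtil.decodeBase64(Buffer.from(hexKey, 'hex').toString())
}

function encrypt (message, senderIdentity, recipientPublicKeyHex) {
  var nonce = nacl.randomBytes(nacl.box.nonceLength)
  var box = nacl.box(
    naclUtil.decodeUTF8(message),          // plaintext as bytes
    nonce,
    keyToBytes(recipientPublicKeyHex),     // receiver's public key
    keyToBytes(senderIdentity.secretKey))  // sender's secret key
  // The sender's public key rides along so the receiver can check it against
  // its whitelist and knows which key to open the box with.
  return {
    publicKey: senderIdentity.publicKey,
    nonce: naclUtil.encodeBase64(nonce),
    box: naclUtil.encodeBase64(box)
  }
}

function decrypt (payload, secretKeyHex) {
  var opened = nacl.box.open(
    naclUtil.decodeBase64(payload.box),
    naclUtil.decodeBase64(payload.nonce),
    keyToBytes(payload.publicKey),
    keyToBytes(secretKeyHex))
  // nacl.box.open returns null if the box was altered or the keys don't match
  if (!opened) throw new Error('decryption failed: wrong key or altered message')
  return naclUtil.encodeUTF8(opened)
}

module.exports = { encrypt: encrypt, decrypt: decrypt }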


Next, we are finally ready to write the client code:


var mqtt = require('mqtt')
var readline = require('readline')
var client  = mqtt.connect('mqtt://localhost:1883');
var encrypt = require('./identity.js').encrypt
var decrypt = require('./identity.js').decrypt


var R = require('ramda')


const clientIdentity = {
 secretKey:"6e3856435a3374507a72317236616f4f6234614c74686d54346c654e4d7150714b7a33564473475a4d626b3d",
 publicKey:"526d4b73516b614357533843344f7a342b51785a6847365867666c4e494830514e4c5334377430493856453d"
}

// list of device client can talk to
const devicePublicKey = "653172766e53504738505a51625a7646453337355345394d4367644b54646e486a6733313845444c7858413d"

var rl = readline.createInterface({
 input: process.stdin,
 output: process.stdout
});

let sendingTime = new Date().getTime()
let receivingTime = new Date().getTime()

client.on('connect', function () {
 client.subscribe(`${clientIdentity.publicKey}`, function (err) {
   if(!err) console.log(`Client listening on ${clientIdentity.publicKey}`)
   rl.on('line', (input) => {
       console.log(`sending ${input} to device`)
       const message = encrypt(input, clientIdentity, devicePublicKey)
        // Send messages to the device via its public key
       console.log(`sending message = ${JSON.stringify(message)}`)
       client.publish(devicePublicKey, JSON.stringify(message))
       sendingTime = new Date().getTime()
   })
 })
})


client.on('message', function (topic, message) {

 console.log(`Received a message on the topic: ${topic}`)
 // message is Buffer
 const payload = R.compose(JSON.parse, R.map(x => x.toString()))(message)
 console.log(`Received payload ${JSON.stringify(payload)} from device`)
 if (payload.publicKey !== devicePublicKey){
    console.log('Unauthorized message received.');
    return
  }
  // if box is not properly signed, this will throw
 const unencrypted = decrypt(payload, clientIdentity.secretKey)
 receivingTime = new Date().getTime()
 console.log(`Device sent me a message!: ${unencrypted}`)
 console.log(`It took ${receivingTime - sendingTime} ms\n`)
})

If you look closely, this code is almost exactly the same as the device code, apart from one little thing: the readline event. I included it so you can send short messages to the device by typing a message and hitting enter. Again, the same concepts apply. We have the public and private keys associated with the client, and we also jot down the public key of our device. Another thing to notice is that I have encoded the public keys as hex, because you cannot create subscriptions using base64 strings with the MQTT library.
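
The post doesn’t show how these key pairs were generated, but a sketch along the following lines would produce keys in the same hex-encoded shape used above; the file name and the exact encoding are my assumptions, not code from the original lab:

// keygen.js: a sketch of generating an identity; how the original keys were
// actually produced is not shown in the post.
var nacl = require('tweetnacl')
var naclUtil = require('tweetnacl-util')

// base64-encode the raw key bytes, then hex-encode that string so the result
// is safe to use as an MQTT topic name (hex contains no '/', '+' or '#').
function toHex (bytes) {
  return Buffer.from(naclUtil.encodeBase64(bytes)).toString('hex')
}

var pair = nacl.box.keyPair()
console.log(JSON.stringify({
  publicKey: toHex(pair.publicKey),
  secretKey: toHex(pair.secretKey)
}, null, 2))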


Now to put it all together, all you have to do is run the broker (“node broker.js”), the client (“node client.js”), and finally the device (“node device.js”). Congratulations, you now have an end-to-end encrypted IoT service using an MQTT message broker and TweetNaCl! The complete source code is on GitHub here: https://github.com/AcidLeroy/mqtt-lab