Error with handshake process #295
What Particle device are you using?
It's an Electron, and yes! I did the step to change the profile (I had to change the commands slightly to make them work, though): `particle keys server default_key.pub.pem --host IP_ADDRESS --port 5683 --protocol tcp` and also `particle keys protocol --protocol tcp`.
By the way, I'm using the latest Particle CLI version (2.3.0).
There are two things I can think of:
I did the whole process a few times, but I can check again!
Ok, I think I just need to use the original version of this: usrz/ec-key@23f4dd3. I'm guessing the dependencies are just out of date now.
I've changed Is there anything I'm missing?
I updated the dependency. You'll need to pull the latest version of this repo and
I've seen that the spark-protocol repo was updated, but not this one. Shouldn't this package.json be updated to the new ec-key ^0.0.4 too? I can't update properly; it keeps downloading spark-protocol without your last changes.
Doh -- I didn't push. It's updated now. |
Tried with the updated version but it still fails with the same error. Also, I've moved to node This is the "updated" error log:
The only thing I can think of is that the key is improperly formatted. Can you run a test script with
Here's the test with the two keys: the one in deviceKeys.db (it also matches the generated XXXXX_rsa_new.pub.pem) and the one received from the device during the handshake.

```javascript
const ECKey = require('ec-key');

const device_db_pem = "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEaUXZhpYnCARKEx1FGFrOgu8Dgyb5\n//Drwxg2oR8LkP37MDj7ESmj78PdBqD6PeNmvBMQg5Z7NQ8saRDxX1h50g==\n-----END PUBLIC KEY-----\n";

const device_sent_pem = "-----BEGIN PUBLIC KEY-----\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQBThDnRSoUGN20VitFNf4Kj1wWh\nIdiJvFJ4CPj2oAoGCCqGSM49AwEHoUQDQgAEaUXZhpYnCARKEx1FGFrOgu8Dgyb5\n//Drwxg2oR8LkP37MDj7ESmj78PdBqD6PeNmvBMQg5Z7NQ8saRDxX1h50v//////\n/////////////////wIDAQAB\n-----END PUBLIC KEY-----\n";

var key = new ECKey(device_db_pem, 'pem');
console.log(key);

// The second one throws:
var key2 = new ECKey(device_sent_pem, 'pem');
console.log(key2);

/*
D:\spark-server>node test.js
D:\spark-server\node_modules\asn1.js\lib\asn1\base\reporter.js:84
      throw err;
      ^
ReporterError: Failed to match tag: "objid" at: ["algorithmIdentifier"]["parameters"]
    at DecoderBuffer.error (D:\spark-server\node_modules\asn1.js\lib\asn1\base\reporter.js:78:11)
    at DERNode.decodeTag [as _decodeTag] (D:\spark-server\node_modules\asn1.js\lib\asn1\decoders\der.js:71:19)
    at DERNode.decode [as _decode] (D:\spark-server\node_modules\asn1.js\lib\asn1\base\node.js:341:25)
    at decodeChildren (D:\spark-server\node_modules\asn1.js\lib\asn1\base\node.js:378:15)
    at Array.forEach (<anonymous>)
    at DERNode.decode [as _decode] (D:\spark-server\node_modules\asn1.js\lib\asn1\base\node.js:375:22)
    at decodeChildren (D:\spark-server\node_modules\asn1.js\lib\asn1\base\node.js:378:15)
    at Array.forEach (<anonymous>)
    at DERNode.decode [as _decode] (D:\spark-server\node_modules\asn1.js\lib\asn1\base\node.js:375:22)
    at DERNode.decode [as _decode] (D:\spark-server\node_modules\asn1.js\lib\asn1\base\node.js:280:47) {
  path: '["algorithmIdentifier"]["parameters"]'
}
*/
```
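For what it's worth, that ReporterError is thrown while ec-key's ASN.1 parser decodes the key's AlgorithmIdentifier. A hedged way to see which algorithm OID a key's DER body actually carries, without any ASN.1 parser at all, is to scan for the raw OID bytes (the `sniffKeyType` helper below is my own sketch, not part of ec-key or spark-protocol):

```javascript
// DER-encoded OIDs: rsaEncryption (1.2.840.113549.1.1.1) and
// id-ecPublicKey (1.2.840.10045.2.1), each prefixed with tag 0x06 + length.
const RSA_OID = Buffer.from('06092a864886f70d010101', 'hex');
const EC_OID = Buffer.from('06072a8648ce3d0201', 'hex');

function sniffKeyType(pem) {
  // Strip the PEM armor lines and whitespace, then decode to raw DER bytes.
  const b64 = pem.replace(/-----[A-Z ]+-----/g, '').replace(/\s+/g, '');
  const der = Buffer.from(b64, 'base64');
  // Look for the AlgorithmIdentifier OID bytes inside the DER.
  if (der.includes(RSA_OID)) return 'rsa';
  if (der.includes(EC_OID)) return 'ec';
  return 'unknown';
}
```

Run against the two PEMs pasted above, this kind of check shows whether the device actually sent an RSA-wrapped key rather than the EC key the parser expects.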
Ok -- that's super helpful. It's definitely a different format. I'm wondering if it's just an RSA key instead of an EC key. This makes me think that I don't need the ec-key parsing code since I'm not using UDP.
I'm not completely sure how this issue is related, because I always use --protocol tcp in all Particle CLI commands. Also, that issue is closed, so I'm not sure how they finally implemented it.
What I'm saying is that I implemented Electron support before they fixed that bug. In the past the Electron always sent ECC keys for UDP and TCP. Now that it's fixed, I can just use the RSA key code. I'll try to find my Electron and see if I can get this fixed tonight.
Let me know if this works -- I couldn't find my Electron, so I'm just guessing at a fix.
It now seems to compare the keys properly, but the keys still don't match:

```
07:36:51.287Z INFO DeviceServer.js: New Connection
07:36:52.332Z ERROR Handshake.js: Handshake failed (cache_key=_31, deviceID=XXXXXXXXXXXXXXXXXXXXXXXX, ip=::ffff:XX.XX.XX.XX)
Error: key passed to device during handshake doesn't match saved public key: XXXXXXXXXXXXXXXXXXXXXXXX
    at Handshake._callee5$ (D:\spark-server\node_modules\spark-protocol\dist\lib\Handshake.js:388:21)
    at tryCatch (D:\spark-server\node_modules\regenerator-runtime\runtime.js:62:40)
    at Generator.invoke [as _invoke] (D:\spark-server\node_modules\regenerator-runtime\runtime.js:296:22)
    at Generator.next (D:\spark-server\node_modules\regenerator-runtime\runtime.js:114:21)
    at step (D:\spark-server\node_modules\babel-runtime\helpers\asyncToGenerator.js:17:30)
    at D:\spark-server\node_modules\babel-runtime\helpers\asyncToGenerator.js:28:13
07:36:52.333Z INFO Device.js: Device disconnected (cache_key=_31, deviceID="", disconnectCounter=1)
07:36:52.334Z ERROR DeviceServer.js: Device startup failed (deviceID=null)
Error: key passed to device during handshake doesn't match saved public key: XXXXXXXXXXXXXXXXXXXXXXXX
    at Handshake._callee5$ (D:\spark-server\node_modules\spark-protocol\dist\lib\Handshake.js:388:21)
    at tryCatch (D:\spark-server\node_modules\regenerator-runtime\runtime.js:62:40)
    at Generator.invoke [as _invoke] (D:\spark-server\node_modules\regenerator-runtime\runtime.js:296:22)
    at Generator.next (D:\spark-server\node_modules\regenerator-runtime\runtime.js:114:21)
    at step (D:\spark-server\node_modules\babel-runtime\helpers\asyncToGenerator.js:17:30)
    at D:\spark-server\node_modules\babel-runtime\helpers\asyncToGenerator.js:28:13
```

I've done the complete process with the updated code to re-generate the device/database keys and made sure it's all set properly.
Do you have something other than an Electron to test with? I just want to make sure that it's a server issue and not an issue with the setup process.
Yes! I've tried with a Photon and it works! But it's still not working with the Electron. I also checked a Boron, but it doesn't even connect, so that might be a UDP/TCP issue.
So the
All devices have an ECC key in the database (except the Photon). I'm pretty sure I deleted it before the last update.
I've done it again, cleaning the database, but it still shows
Database:
Keys generated on the computer: https://we.tl/t-C9NqPlanxq
Commands used:
It looks like you're doing everything correctly. I'll need to find my Electron to debug this.
I got my Electron set up this morning but did not hook it back up to a network, so I just tested and verified that it would send an RSA key instead of an ECC key. Try this:
I'm going to open an issue with Particle CLI, but I doubt they will fix it on their end.
I executed your commands and it connected properly! RSA in the database and everything seems OK with the keys. But there's a problem during protocol initialization:
I've done the process from the beginning, but the same thing always happens. I tried Device OS 0.7.0 and 1.2.1.
Well, that's progress! For some reason it seems like the
Yes! Thank you for all your support! BTW: which versions of the Particle CLI / Device OS do you use? It seems weird that all these errors are happening only to me :(
I upgraded to the latest. The reason it's happening to you is that you're using an Electron. Nobody really uses that and hosts their own cloud.
So I connected my Electron to my server and don't see the
Maybe it's related to the 1.2.1 version? Not sure about the
Could be. My Electron is on the latest. Can you try flashing your Electron with
Hi,
I've just tried the build steps, but I'm having trouble during the authentication process.
The error comes when the Particle device tries to connect to the cloud, but if I look at deviceKeys.db, the data for that device is properly stored:
The error is when comparing

`publicKey.equals(deviceProvidedPem)`

at Handshake.js:383 from spark-protocol. (I posted the issue here because I don't really know what the root cause is.) The error log shows this:
I'm using node v8.11.1, npm 5.6.0, and yarn 1.22.4, and tested the server on Ubuntu & Windows. Thank you