Releases: SocketCluster/socketcluster
v9.3.3
v9.3.0
Added async/await support for the worker and broker controller `run` methods. See #351. The `createHTTPServer` method on the worker can also now resolve asynchronously.
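In practice, this means a worker's `run` method can be declared `async` (a minimal sketch based on the class-based worker boilerplate introduced in v9.0.3; the `connectToDatabase` helper is hypothetical):

```js
var SCWorker = require('socketcluster/scworker');

class Worker extends SCWorker {
  // The run method can now be an async function, so asynchronous
  // setup steps can be awaited before attaching handlers.
  async run() {
    // Hypothetical async setup step.
    await connectToDatabase();

    this.scServer.on('connection', (socket) => {
      // Handle the new connection.
    });
  }
}

new Worker();
```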
Potentially breaking changes
- Removed support for the deprecated `global` property on the worker and server objects. You should now use the `exchange` property instead.
- Removed support for the deprecated `getSocketURL()` method of the SocketCluster master instance.
v9.1.1
This version fixes an issue with the `scServer.clients` object and the `scServer.clientsCount` value not being updated correctly under certain specific scenarios. See SocketCluster/socketcluster-server#21.
In addition to fixing the issue, this version introduces a few changes.
Potentially breaking changes
- On the server socket, the `disconnect` event used to trigger whenever the socket lost the connection at any stage of the handshake/connection cycle. That meant that the `disconnect` event could get triggered on the socket before the `connection` event had been triggered on the `scServer` object (which was strange). Also, there was no way to know the specific phase of the cycle when the connection ended (before or after the handshake). Now the `disconnect` event only triggers after the handshake has completed. To catch a lost connection during the handshake phase, you should use the `connectAbort` event on the socket or the `connectionAbort` event on the server. If you need a catch-all to capture any kind of connection termination at any phase of the cycle, you should now use the socket's new `close` event or the server's new `closure` event. Note that this change does not affect the `disconnection` event on the server; only the `disconnect` event on the socket. Also note that this change should only affect you if you have written logic inside a `scServer.on('handshake', handler)` handler. If you just use the `scServer.on('connection', handler)` handler (and thus don't try to access sockets before they are fully connected), then the behaviour of `disconnect` will not have changed from what it was before.
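A minimal sketch of where each event now fires (assumes an `scServer` instance is in scope; handler bodies are placeholders):

```js
scServer.on('handshake', (socket) => {
  // The socket is still in the 'connecting' state here.
  socket.on('connectAbort', () => {
    // Connection was lost during the handshake phase.
  });
  socket.on('close', () => {
    // Catch-all: any termination, in any phase of the cycle.
  });
});

scServer.on('connection', (socket) => {
  socket.on('disconnect', () => {
    // Now only fires after the handshake has completed.
  });
});

// Server-level counterparts:
scServer.on('connectionAbort', (socket) => { /* handshake-phase loss */ });
scServer.on('closure', (socket) => { /* termination in any phase */ });
```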
Non-breaking changes
- Added a new `scServer.pendingClients` property, which is a hashmap similar to `scServer.clients` except that it contains sockets which have not yet completed the handshake (and are therefore still in the 'connecting' state). A matching `scServer.pendingClientsCount` property was also added.
- The `connectAbort` event has been present on client sockets for a long time, but not on server sockets until now. A new `connectAbort` event was added to the server socket, and a matching `connectionAbort` event was added to the `scServer` object.
- As mentioned in the 'potentially breaking changes' section above, a new catch-all `close` event has been added to the socket to replace the `disconnect` event, which had an ambiguous meaning. You can also listen for lost socket connections on the `scServer` using the `closure` event.
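For example (a sketch assuming an `scServer` instance is in scope):

```js
// Sockets which have completed the handshake:
console.log('connected:', scServer.clientsCount);

// Sockets which are still in the 'connecting' state:
console.log('pending:', scServer.pendingClientsCount);

// Both hashmaps are keyed by socket id:
Object.keys(scServer.pendingClients).forEach((id) => {
  console.log('pending socket:', id);
});
```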
v9.0.3 - Maurits
See #333
Special thanks to @mauritslamers for coming up with this proposal and for doing the initial work for this release.
This release improves the way SC bootstraps different processes to give more control to the developer.
As a result of this update, the boilerplate logic for entry points to various processes within SC has changed.
The entry point for the worker controller (worker.js) used to be:
```js
// SocketCluster decides when the run() function is invoked.
module.exports.run = function (worker) {
  // Custom worker logic goes here.
  worker.scServer.on('connection', (socket) => {
    // Handle the new connection.
  });
};
```
but now it is:
```js
var SCWorker = require('socketcluster/scworker');

class Worker extends SCWorker {
  run() {
    // Custom worker logic goes here.
    // You can refer to the worker in here with 'this'.
    this.scServer.on('connection', (socket) => {
      // Handle the new connection.
    });
  }
}

new Worker();
```
Note that you can use any of the approaches mentioned in issue #333 but the class-based approach is the default recommended approach (unless you need to support really old versions of Node.js).
The new approach was inspired by Java's `Runnable` interface for threads (except that in the case of SC, we have specialized processes instead of threads); see https://docs.oracle.com/javase/tutorial/essential/concurrency/runthread.html.
As part of this change, you can now override the SCWorker's `createHTTPServer()` method to return your own custom HTTP server for SC to use (it needs to be compatible with the default Node.js HTTP server, though). This should make it easier to use other back end frameworks with SC.
For more details on the new worker boilerplate see https://socketcluster.io/#!/docs/api-scworker
For more details on the new broker boilerplate see https://socketcluster.io/#!/docs/api-broker
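The broker entry point (broker.js) follows the same pattern as the worker boilerplate shown above (a minimal sketch based on the linked sample files):

```js
var SCBroker = require('socketcluster/scbroker');

class Broker extends SCBroker {
  run() {
    // Custom broker logic goes here.
    console.log('Broker started, PID:', process.pid);
  }
}

new Broker();
```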
Breaking changes
- `exchange.run(...)` should be renamed to `exchange.exec(...)` throughout your code (`run` now has a special meaning related to the various process controllers).
- When requiring the master SocketCluster object (e.g. in server.js), it should now be `const SocketCluster = require('socketcluster');` instead of `const SocketCluster = require('socketcluster').SocketCluster;`.
- New entry point boilerplate for `worker.js`: https://github.com/SocketCluster/socketcluster/blob/de9757cdb078805bfa0f808687b5e6a8e2ef5c20/sample/worker.js
- New entry point boilerplate for `broker.js`: https://github.com/SocketCluster/socketcluster/blob/de9757cdb078805bfa0f808687b5e6a8e2ef5c20/sample/broker.js
- New entry point boilerplate for the `workerCluster` process; it's similar to the new `worker.js` and `broker.js` format, but you need to import the `SCWorkerCluster` base class using `var SCWorkerCluster = require('./scworkercluster');`.
- No more `initController` option. Since the developer now has full control over the instantiation of the various process controllers, it became pretty useless and didn't fit into the new model.
- No more `httpServerModule` option. You can now simply override the SCWorker's `createHTTPServer()` method to achieve the same thing.
- For Docker/Kubernetes SCC there is no more `master.js` file; now you can just provide your own `server.js` in the volume container to be your master process: https://github.com/SocketCluster/socketcluster/blob/de9757cdb078805bfa0f808687b5e6a8e2ef5c20/sample/server.js
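The new require style from the second point, in a minimal server.js (a sketch; the option values and file paths are illustrative):

```js
// Before: const SocketCluster = require('socketcluster').SocketCluster;
const SocketCluster = require('socketcluster');

const socketCluster = new SocketCluster({
  workers: 1,
  brokers: 1,
  port: 8000,
  workerController: __dirname + '/worker.js',
  brokerController: __dirname + '/broker.js'
});
```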
v8.0.2
No breaking API changes from 7.x.x but it makes an addition to the SC protocol which may affect codecs and other SC plugins which interact directly with the SC protocol.
- Added subscription batching. This allows you to batch channel subscriptions together to reduce the number of WebSocket frames required to subscribe to multiple channels. This should result in a significant performance improvement when subscribing to a large number of unique channels in a short period of time. It should also help to improve the performance of reconnections (automatic re-subscriptions) when clients handle a large number of channels. Note that channels are non-batching by default; you need to set the `batch` option to `true` when subscribing to a new channel in order to allow that channel to be batched. See the `subscribe` method here: https://socketcluster.io/#!/docs/api-scsocket-client
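Client-side usage might look like this (a sketch assuming a connected socketcluster-client `socket` is in scope; the channel name is illustrative):

```js
// Opt this channel into subscription batching so its subscribe
// request can share a WebSocket frame with other pending ones.
var channel = socket.subscribe('myChannel', { batch: true });

channel.watch((data) => {
  // Handle data published to the channel.
});
```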
v7.0.2
Note that v7.0.x shouldn't have any major breaking changes from 6.8.0.
- Updated SocketCluster version number to match the new major client version number 7.x.x.
- Fixed an issue with the `expiresIn` option of `socket.setAuthToken(data, options)` being ignored; also improved error handling related to the `socket.setAuthToken` function.
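A sketch of the now-honoured option (assumes a server-side `socket` is in scope; the token payload is illustrative):

```js
// The expiresIn option (in seconds) is passed through to the JWT
// signing step and is no longer ignored.
socket.setAuthToken({ username: 'alice' }, { expiresIn: 3600 });
```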
v6.8.0
- Removed all remaining traces of domains in SC.
- Removed custom sc-emitter in favor of component-emitter for the client and server SCSocket objects.
Possible breaking change:
- SC no longer uses domains internally to capture errors (Node.js has deprecated them); instead, SC now listens for 'error' events directly on the servers and sockets which it creates. This means that you should never call `removeAllListeners('error')` on objects created by SC, as this will destroy SC's internal error handling. Note that the Node.js documentation explicitly recommends against removing listeners in this way. See https://nodejs.org/api/events.html#events_emitter_removealllisteners_eventname
v6.7.0
- Refactored the code, especially in and around the `sc-broker` module.
- Improved visibility of broker errors (better stack traces).
- Improved visibility of errors throughout SC in general.
- Brokers recover from crashes faster.
- Broker actions and messages now have improved buffering so broker crashes/recovery should be more seamless.
- Improved buffering for all IPC messages: sendToBroker, sendToWorker and sendToMaster.
- Improved tests around brokers.
v6.6.0
- Added the ability for processes to easily respond to messages from other processes over IPC via callbacks. A new callback argument was added to the sendToWorker, sendToBroker and sendToMaster methods throughout SC (see the updated API docs on the https://socketcluster.io website).
- Added buffering to the sendToWorker function on the master. Previously, if the workerCluster wasn't ready yet or was down, invoking `socketCluster.sendToWorker(...)` would trigger an error; now the messages will be buffered and delivered as soon as the workerCluster is ready.
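A sketch of the new callback-based IPC round trip (assumes a `socketCluster` master instance and a `worker` instance in their respective processes; the message payloads are illustrative):

```js
// Master process: send a message to worker 0 and receive a reply
// through the new callback argument.
socketCluster.sendToWorker(0, { question: 'status?' }, (err, data) => {
  if (err) {
    console.error(err);
  } else {
    console.log('Worker replied:', data);
  }
});

// Worker process: respond to messages from the master.
worker.on('masterMessage', (data, respond) => {
  // respond(error, responseData) sends the reply back to the master.
  respond(null, { status: 'ok' });
});
```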