
[Shanda Conference] Definitions of some basic tools

2022-06-22 16:04:00 What does Xiao Li Mao eat today

Preface

To implement the various client-side features of the Shanda Conference and to improve development efficiency, I needed to design some utility classes for the other components during development, extracting and encapsulating reusable code.

HTTP request utility class

This is the utility class I use to send asynchronous HTTP requests. I added the axios module to the project and wrapped it to get my own utility class for sending network requests.
The full code of the utility class is as follows:

// Axios.ts
import axios, { AxiosInstance, AxiosRequestHeaders } from 'axios';
import store from 'Utils/Store/store';

const instance = axios.create({
	baseURL: 'http://meeting.aiolia.top:8080/',
});
// const wsInstance = axios.create({
// 	baseURL: 'http://meeting.aiolia.top:8080/chat/',
// });
instance.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded';
// wsInstance.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded';

// Keep the Authorization header in sync with the token in the Redux store
store.subscribe(() => {
	const token = store.getState().authToken;
	instance.defaults.headers.common['Authorization'] = token;
	// wsInstance.defaults.headers.common['Authorization'] = token;
});

// Serialize a plain object into an application/x-www-form-urlencoded string.
// encodeURIComponent (rather than encodeURI) is used so that '&' and '='
// inside keys or values are escaped correctly.
function convertParamsToData(param: object) {
	const paramArr = [];
	for (const key in param) {
		if (Object.prototype.hasOwnProperty.call(param, key)) {
			const value = param[key as keyof typeof param];
			paramArr.push(`${encodeURIComponent(key)}=${encodeURIComponent(value)}`);
		}
	}
	return paramArr.length > 0 ? paramArr.join('&') : '';
}

/**
 * Builds an Ajax object from an AxiosInstance
 * @param {AxiosInstance} instance the AxiosInstance to wrap
 */
class Ajax {
	instance: AxiosInstance;

	constructor(instance: AxiosInstance) {
		this.instance = instance;
	}

	post(url: string, params?: object, headers?: AxiosRequestHeaders): Promise<any> {
		return new Promise((resolve, reject) => {
			this.instance({
				method: 'post',
				url,
				data: params ? convertParamsToData(params) : '',
				headers,
			})
				.then((response) => {
					resolve(response.data);
				})
				.catch((error) => {
					reject({ error, ajax: true });
				});
		});
	}

	file(url: string, params: object, headers = {}): Promise<any> {
		const param = new FormData();
		for (const key in params) {
			if (Object.prototype.hasOwnProperty.call(params, key)) {
				param.append(key, params[key as keyof typeof params]);
			}
		}
		return new Promise((resolve, reject) => {
			this.instance({
				method: 'post',
				url,
				data: param,
				headers: Object.assign(headers, { 'Content-Type': 'multipart/form-data' }),
			})
				.then((response) => {
					resolve(response.data);
				})
				.catch((error) => {
					reject({ error, ajax: true });
				});
		});
	}

	get(url: string, params?: object, headers?: AxiosRequestHeaders): Promise<any> {
		return new Promise((resolve, reject) => {
			this.instance({
				method: 'GET',
				url,
				headers,
				params,
			})
				.then((response) => {
					resolve(response.data);
				})
				.catch((error) => {
					reject({ error, ajax: true });
				});
		});
	}
}

const ajax = new Ajax(instance);
// const wsAjax = new Ajax(wsInstance);

export default ajax;
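As a standalone illustration of the serialization step, here is a minimal sketch of the same form-encoding logic (the helper name is mine, not the project's):

```typescript
// Standalone sketch of the form-encoding used by Ajax.post().
// encodeURIComponent escapes '&' and '=' inside values, which plain
// encodeURI would leave intact and thereby corrupt the payload.
function toFormEncoded(param: Record<string, string | number>): string {
	const parts: string[] = [];
	for (const key in param) {
		if (Object.prototype.hasOwnProperty.call(param, key)) {
			parts.push(`${encodeURIComponent(key)}=${encodeURIComponent(String(param[key]))}`);
		}
	}
	return parts.join('&');
}

console.log(toFormEncoded({ user: 'alice', note: 'a&b=c' }));
// → user=alice&note=a%26b%3Dc
```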

Event bus

Because React advocates one-way data flow, it is hard to achieve Vue-style two-way binding without special techniques. Redux is one solution, but parts of its usage are too cumbersome, and a very simple operation does not need to go through Redux. So I added an event-bus utility class to the project, designed around the publish/subscribe pattern. The full code is as follows:

interface EventBusFunction {
	func: Function;
	once: boolean;
}

class EventBus {
	events: {
		[key: string]: EventBusFunction[];
	};
	handlers: {
		[key: string]: Function;
	};

	constructor() {
		this.events = {};
		this.handlers = {};
	}

	/**
	 * Add a listener to the event bus
	 * @param {string} type the event name to listen on
	 * @param {Function} func the function executed when the event fires
	 */
	on(type: string, func: Function) {
		if (!this.events[type]) this.events[type] = [];
		this.events[type].push({
			func,
			once: false,
		});
	}

	/**
	 * Add a listener to the event bus that fires only once
	 * @param {string} type the event name to listen on
	 * @param {Function} func the function executed when the event fires
	 */
	once(type: string, func: Function) {
		if (!this.events[type]) this.events[type] = [];
		this.events[type].push({
			func,
			once: true,
		});
	}

	/**
	 * Trigger an event
	 * @param {string} type the event to trigger
	 * @param {...any} args arguments passed to the listeners
	 */
	emit(type: string, ...args: any[]) {
		if (this.events[type]) {
			// Snapshot and clear, then keep the listeners that are not once-only,
			// calling them in registration order
			const cbs = this.events[type].splice(0);
			const newCbs: EventBusFunction[] = [];
			for (const cb of cbs) {
				cb.func.apply(this, args);
				if (!cb.once) newCbs.push(cb);
			}
			if (newCbs.length === 0) delete this.events[type];
			else this.events[type] = newCbs;
		}
	}

	/**
	 * Remove a function's listener from the event bus
	 * @param {string} type the event to remove the function from
	 * @param {Function} func the function to remove
	 * @returns {boolean} whether a listener was actually removed
	 */
	off(type: string, func: Function): boolean {
		if (this.events && this.events[type]) {
			const cbs = this.events[type];
			let index = -1;
			for (const cb of cbs) {
				index++;
				if (cb.func === func) {
					cbs.splice(index, 1);
					if (cbs.length === 0) delete this.events[type];
					return true;
				}
			}
		}
		return false;
	}

	/**
	 * Remove all listeners for an event from the event bus
	 * @param {string} type the event to clear
	 */
	offAll(type: string) {
		if (this.events) {
			delete this.events[type];
		}
	}

	/**
	 * Register a handler callback on the event bus
	 * @param {string} type the handler name
	 * @param {Function} cb the callback function
	 */
	handle(type: string, cb: Function): { ok: boolean; err?: Error } {
		if (this.handlers[type]) {
			return {
				ok: false,
				err: new Error(`Handler '${type}' has already been registered.`),
			};
		} else {
			this.handlers[type] = cb;
			return {
				ok: true,
			};
		}
	}

	/**
	 * Invoke a handler asynchronously
	 * @param {string} type the handler to invoke
	 * @param {Array} args arguments passed to the callback
	 * @returns {Promise} the callback's result wrapped in a Promise
	 */
	invoke(type: string, ...args: any[]) {
		const handler = this.handlers[type];
		if (handler) {
			return Promise.resolve(handler.apply(this, args));
		} else {
			// Reject so that callers can .catch() the missing-handler error
			return Promise.reject(new Error(`Handler '${type}' has not been registered yet.`));
		}
	}

	/**
	 * Invoke a handler synchronously
	 * @param {string} type the handler to invoke
	 * @param {Array} args arguments passed to the callback
	 * @returns the callback's return value
	 */
	invokeSync(type: string, ...args: any[]) {
		const handler = this.handlers[type];
		if (handler) {
			return handler.apply(this, args);
		} else {
			throw new Error(`Handler '${type}' has not been registered yet.`);
		}
	}

	/**
	 * Remove a handler
	 * @param {string} type the handler to remove
	 */
	removeHandler(type: string) {
		delete this.handlers[type];
	}
}

const eventBus = new EventBus();
export default eventBus;
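The on/once/emit flow can be exercised in isolation. The sketch below is a stripped-down bus written for illustration, not the project's class:

```typescript
// Minimal publish/subscribe bus: once-listeners fire a single time,
// ordinary listeners survive every emit.
type Listener = { func: (...args: any[]) => void; once: boolean };

class MiniBus {
	private events: Record<string, Listener[]> = {};

	on(type: string, func: (...args: any[]) => void) {
		if (!this.events[type]) this.events[type] = [];
		this.events[type].push({ func, once: false });
	}

	once(type: string, func: (...args: any[]) => void) {
		if (!this.events[type]) this.events[type] = [];
		this.events[type].push({ func, once: true });
	}

	emit(type: string, ...args: any[]) {
		const kept: Listener[] = [];
		for (const listener of this.events[type] ?? []) {
			listener.func(...args);
			if (!listener.once) kept.push(listener); // once-listeners are dropped after firing
		}
		if (kept.length > 0) this.events[type] = kept;
		else delete this.events[type];
	}
}

const bus = new MiniBus();
const log: string[] = [];
bus.on('msg', (text: string) => log.push(`on:${text}`));
bus.once('msg', (text: string) => log.push(`once:${text}`));
bus.emit('msg', 'a');
bus.emit('msg', 'b');
console.log(log); // → [ 'on:a', 'once:a', 'on:b' ]
```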

Custom Hooks

In the previous post I introduced one custom Hook in this project, useVolume. Besides it, I defined another Hook; the two are written together in the MyHooks.ts file. The other custom Hook is usePrevious, which preserves a variable's value from the previous React render. Its implementation is simple and ingenious:

import { useRef, useEffect } from 'react';

/**
 * [Custom Hook] Keep a value from the previous render
 * @param {any} value the value to track
 * @returns the value as it was on the previous render
 */
const usePrevious = (value: any): typeof value => {
	const ref = useRef();
	useEffect(() => {
		// Runs after the render, so the update is not visible until the next render
		ref.current = value;
	});
	return ref.current;
};

Thanks to how useEffect works, the Hook first returns ref.current during render, and only afterwards does the effect run and update ref.current, so the returned value is always the one from the previous render.
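The one-render lag can be simulated without React at all. This is a plain-object sketch of the mechanism (React's real scheduling is more involved):

```typescript
// Simulating usePrevious: the render reads ref.current first, and only
// afterwards does the "effect" write the new value into the ref.
const ref: { current: number | undefined } = { current: undefined };

function renderWithPrevious(value: number): number | undefined {
	const previous = ref.current; // what usePrevious returns during this render
	ref.current = value; // what the useEffect callback writes after the render
	return previous;
}

console.log(renderWithPrevious(1)); // → undefined
console.log(renderWithPrevious(2)); // → 1
console.log(renderWithPrevious(3)); // → 2
```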

Message tone

In our project I added an instant-messaging feature: users can add other users as friends through the friend system and chat with them in real time. To improve the user experience, I wrote the following utility to play a message notification tone.

export const AUDIO_TYPE = {
	MESSAGE_RECEIVED: 'info',
	WEBRTC_CALLING: 'call',
	WEBRTC_ANSWERING: 'answer',
};

export const buildPropmt = function (audioType: string, loop = false) {
	const audioContext = new AudioContext();
	let source = audioContext.createBufferSource();
	const audio = require(`./audios/${audioType}.mp3`);
	const stopAudioPropmt = () => {
		if (source.buffer) {
			source.stop();
			// An AudioBufferSourceNode can only be started once, so create a fresh one
			source = audioContext.createBufferSource();
		}
	};
	const startAudioPropmt = () => {
		fetch(audio.default)
			.then((res) => res.arrayBuffer())
			.then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))
			.then((audioBuffer) => {
				stopAudioPropmt();
				source.buffer = audioBuffer;
				source.loop = loop;
				source.connect(audioContext.destination);
				// Start only after the buffer is attached to the current node
				source.start(0);
			});
	};
	return [startAudioPropmt, stopAudioPropmt];
};

I use an AudioContext to create an AudioBufferSourceNode and play the binary audio data through it. For the convenience of other components, I encapsulated this feature so that it returns two functions: one starts the audio, and the other stops playback, preventing problems such as memory leaks.

Constant definitions

During development I use many constants, usually shared by multiple components and modules. To reduce code duplication and improve maintainability, I extracted them into Utils/Constraints.ts.

// Constraints.ts
/**
 * This file stores shared constants
 */
// Audio and video devices
export const DEVICE_TYPE = {
	VIDEO_DEVICE: 'video',
	AUDIO_DEVICE: 'audio',
};

/**
 * Call status
 */
export const CALL_STATUS_FREE = 0;
export const CALL_STATUS_OFFERING = 1;
export const CALL_STATUS_OFFERED = 2;
export const CALL_STATUS_ANSWERING = 3;
export const CALL_STATUS_CALLING = 4;

/**
 * Replies to friend requests
 */
export const ACCEPT_FRIEND_REQUEST = 2;
export const REJECT_FRIEND_REQUEST = 1;
export const NO_OPERATION_FRIEND_REQUEST = -1;

/**
 * Chat-system WebSocket `type` parameter
 */
export enum ChatWebSocketType {
	UNDEFINED_0, // undefined 0 placeholder
	CHAT_SEND_PRIVATE_MESSAGE, // send a private chat message
	CHAT_READ_MESSAGE, // acknowledge a private chat message
	CHAT_SEND_FRIEND_REQUEST, // send a friend request
	CHAT_ANSWER_FRIEND_REQUEST, // respond to a friend request
	CHAT_PRIVATE_WEBRTC_OFFER, // send a video chat OFFER
	CHAT_PRIVATE_WEBRTC_ANSWER, // respond with a video chat ANSWER
	CHAT_PRIVATE_WEBRTC_CANDIDATE, // video chat ICE candidate
	CHAT_PRIVATE_WEBRTC_DISCONNECT, // end the video chat
	CHAT_PRIVATE_WEBRTC_REQUEST, // send a video call request
	CHAT_PRIVATE_WEBRTC_RESPONSE, // respond to a video call request
}
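A side note on the UNDEFINED_0 member: TypeScript numeric enums auto-increment from 0, so the placeholder makes the first real message type start at 1. A truncated copy for illustration:

```typescript
// Sketch: a truncated copy of ChatWebSocketType, showing that numeric enum
// members auto-increment from 0 and also get a reverse (number → name) mapping.
enum ChatTypeSketch {
	UNDEFINED_0,
	CHAT_SEND_PRIVATE_MESSAGE,
	CHAT_READ_MESSAGE,
}

console.log(ChatTypeSketch.CHAT_SEND_PRIVATE_MESSAGE); // → 1
console.log(ChatTypeSketch[2]); // → CHAT_READ_MESSAGE
```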

/**
 * Private chat response constants
 */
export const PRIVATE_WEBRTC_ANSWER_TYPE = {
	NO_USER: -2, // the user does not exist
	REJECT: -1, // request rejected
	BUSY: 0, // the line is busy
	ACCEPT: 1, // request accepted
};

// NOTE: supported codecs
const senderCodecs = RTCRtpSender.getCapabilities('video')?.codecs as RTCRtpCodecCapability[];
const receiverCodecs = RTCRtpReceiver.getCapabilities('video')?.codecs as RTCRtpCodecCapability[];

// Move the baseline H.264 profile to the front of each list so it is preferred
(() => {
	const isBaselineH264 = (c: RTCRtpCodecCapability) =>
		c.mimeType === 'video/H264' &&
		c.sdpFmtpLine ===
			'level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f';

	const senderH264Index = senderCodecs?.findIndex(isBaselineH264);
	// Guard against -1 (not found); `index ? index : 0` would let -1 through
	if (senderCodecs && senderH264Index !== undefined && senderH264Index > 0) {
		const [senderH264] = senderCodecs.splice(senderH264Index, 1);
		senderCodecs.unshift(senderH264);
	}

	const receiverH264Index = receiverCodecs?.findIndex(isBaselineH264);
	if (receiverCodecs && receiverH264Index !== undefined && receiverH264Index > 0) {
		const [receiverH264] = receiverCodecs.splice(receiverH264Index, 1);
		receiverCodecs.unshift(receiverH264);
	}
})();
export { senderCodecs, receiverCodecs };
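The reorder step above generalizes to a small helper. The sketch below uses plain objects, since RTCRtpSender capabilities are only available in a browser:

```typescript
// Move the first element matching a predicate to the front of a list,
// leaving it untouched when nothing matches or the match is already first.
function preferFirst<T>(items: T[], match: (item: T) => boolean): T[] {
	const index = items.findIndex(match);
	if (index > 0) {
		const [preferred] = items.splice(index, 1);
		items.unshift(preferred);
	}
	return items;
}

const codecs = [{ mimeType: 'video/VP8' }, { mimeType: 'video/H264' }, { mimeType: 'video/AV1' }];
preferFirst(codecs, (c) => c.mimeType === 'video/H264');
console.log(codecs.map((c) => c.mimeType)); // → [ 'video/H264', 'video/VP8', 'video/AV1' ]
```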

Meanwhile, since the project is developed in TypeScript, I also needed to define several interfaces for convenience; I put them in Utils/Types.ts.

// Types.ts
import { ReactNode } from 'react';

export interface ChatMessage {
	date: number;
	fromId: number;
	id: number;
	message: string;
	toId: number;
	myId?: number;
	userId: number;
}

export interface DeviceInfo {
	webLabel?: ReactNode;
	deviceId: string;
	label: string;
}

export interface UserInfo {
	email: string;
	exp: number;
	iat: number;
	id: number;
	iss: string;
	profile: string | false;
	role: [
		{
			authority: string;
			id: number;
		}
	];
	sub: string;
	username: string;
}

interface ElectronWindow {
	captureDesktop: () => Promise<HTMLVideoElement>;
	ipc: {
		on: (channel: string, cb: Function) => void;
		once: (channel: string, cb: Function) => void;
		invoke: (channel: string, ...args: any) => Promise<any>;
		removeListener: (channel: string, cb: Function) => void;
		send: (channel: string, ...args: any) => void;
	};
}
declare const window: Window & typeof globalThis & ElectronWindow;
const eWindow = window;
export { eWindow };

Some global functions

What remains are scattered functions that are hard to classify. They are called frequently by various components, and splitting each into its own file would be wasteful, so I put them all into Global.ts.

Getting the modal mask's main container

The project uses the antd UI component library, whose modal components appear frequently. For aesthetics, these modals should cover only the area below the top drag bar, so I wrote a function that provides a mount container for them.

/**
 * Returns the DOM element the modal mask layer mounts to
 * @returns the element whose id is 'mainContent', or document.body as a fallback
 */
function getMainContent(): HTMLElement {
	const content = document.getElementById('mainContent');
	if (content) {
		return content;
	} else {
		return document.body;
	}
}

Token parsing function

In this project the back end uses a distributed architecture, so we decided to use a token (JWT) to save user state. On the client side I chose the jwt-decode module to parse the JWT.
However, passing an illegal token string to jwtDecode throws an exception. To reduce the use of try...catch statements, I encapsulated the parsing function as well.

import jwtDecode from 'jwt-decode';
import { UserInfo } from './Types';

/**
 * Wraps jwtDecode, which throws when given an illegal token
 * @param {string} token
 * @returns the decoded token, or an empty UserInfo on failure
 */
function decodeJWT(token: string): UserInfo {
	try {
		return jwtDecode(token);
	} catch (error: any) {
		console.log(error);
		return {
			email: '',
			exp: 0,
			iat: 0,
			id: 0,
			iss: '',
			profile: false,
			role: [
				{
					authority: '',
					id: 0,
				},
			],
			sub: '',
			username: '',
		};
	}
}
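For context, jwt-decode essentially base64url-decodes the payload segment and parses it as JSON, with no signature verification. A minimal Node sketch with a hand-built token (not a real credential, and the signature segment is a dummy because decoding never checks it):

```typescript
// Decode only the payload (second) segment of a JWT.
function decodePayload(token: string): any {
	const segment = token.split('.')[1];
	// Convert base64url to base64 before decoding
	const base64 = segment.replace(/-/g, '+').replace(/_/g, '/');
	return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
}

// Build a throwaway token for illustration
const encode = (obj: object) =>
	Buffer.from(JSON.stringify(obj))
		.toString('base64')
		.replace(/=+$/, '')
		.replace(/\+/g, '-')
		.replace(/\//g, '_');

const token = `${encode({ alg: 'HS256', typ: 'JWT' })}.${encode({ id: 7, username: 'demo' })}.dummy-signature`;
console.log(decodePayload(token)); // → { id: 7, username: 'demo' }
```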

Getting a device stream

Many places in the client code need the media stream from a user's multimedia device, so I extracted this into a global function.

import { DEVICE_TYPE } from './Constraints';
import store from './Store/store';
import { DeviceInfo } from './Types';

/**
 * Encapsulated device-stream getter
 * @param {string} device a device type from DEVICE_TYPE
 * @returns the device's MediaStream
 */
async function getDeviceStream(device: string): Promise<MediaStream> {
	switch (device) {
		case DEVICE_TYPE.AUDIO_DEVICE: {
			const audioDevice = store.getState().usingAudioDevice as DeviceInfo;
			const audioConstraints = {
				deviceId: {
					exact: audioDevice.deviceId,
				},
				// These flags default to enabled unless explicitly stored as 'false'
				noiseSuppression: localStorage.getItem('noiseSuppression') !== 'false',
				echoCancellation: localStorage.getItem('echoCancellation') !== 'false',
			};
			try {
				return await navigator.mediaDevices.getUserMedia({ audio: audioConstraints });
			} catch (e) {
				return await getDefaultStream();
			}
		}
		case DEVICE_TYPE.VIDEO_DEVICE: {
			const videoDevice = store.getState().usingVideoDevice as DeviceInfo;
			const videoConstraints = {
				deviceId: {
					exact: videoDevice.deviceId,
				},
				width: 1920,
				height: 1080,
				frameRate: {
					max: 30,
				},
			};
			try {
				return await navigator.mediaDevices.getUserMedia({
					video: videoConstraints,
				});
			} catch (e) {
				return await getDefaultStream();
			}
		}
		default:
			return new MediaStream();
	}
}
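One detail worth noting in the constraints above: the !== 'false' comparison makes the flags default to enabled when the localStorage key has never been written. A standalone sketch:

```typescript
// localStorage.getItem returns null for a key that was never written, and
// null !== 'false', so an unset flag counts as enabled.
const flagEnabled = (stored: string | null): boolean => stored !== 'false';

console.log(flagEnabled(null)); // → true (never set)
console.log(flagEnabled('true')); // → true
console.log(flagEnabled('false')); // → false
```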

Based on the device type passed in, it returns the corresponding device stream wrapped in a Promise. However, some users may have no audio or video device at all, and failing to obtain a stream would be disastrous. To prevent this, I wrote a function that returns a default stream.

let defaultVideoWidget: HTMLVideoElement | undefined;
function getDefaultStream(): Promise<MediaStream> {
	return new Promise((resolve) => {
		if (defaultVideoWidget) {
			resolve((defaultVideoWidget as any).captureStream(1) as MediaStream);
		} else {
			defaultVideoWidget = document.createElement('video');
			defaultVideoWidget.autoplay = true;
			defaultVideoWidget.src = '../electronAssets/null.mp4';
			defaultVideoWidget.loop = true;
			defaultVideoWidget.onloadedmetadata = () => {
				resolve((defaultVideoWidget as any).captureStream(1) as MediaStream);
			};
		}
	});
}

When real stream capture fails, it returns a default media stream, with audio and video tracks, captured from a hidden placeholder video element.

Time conversion functions

Time interval functions

To display different content depending on when an instant message was sent, I defined a set of functions that determine how long ago a message was sent.

export const A_SECOND_TIME = 1000;
export const A_MINUTE_TIME = 60 * A_SECOND_TIME;
export const AN_HOUR_TIME = 60 * A_MINUTE_TIME;
export const A_DAY_TIME = 24 * AN_HOUR_TIME;

export const isSameDay = (
	timeStampA: string | number | Date,
	timeStampB: string | number | Date
) => {
	const dateA = new Date(timeStampA);
	const dateB = new Date(timeStampB);
	// setHours mutates the date and returns its timestamp, so two moments on
	// the same calendar day collapse to the same midnight value
	return dateA.setHours(0, 0, 0, 0) === dateB.setHours(0, 0, 0, 0);
};

export const isSameWeek = (
	timeStampA: string | number | Date,
	timeStampB: string | number | Date
) => {
	// Normalize each date to the start of its (Monday-based) week, then compare
	const startOfWeek = (timeStamp: string | number | Date) => {
		const date = new Date(timeStamp);
		date.setHours(0, 0, 0, 0);
		const daysSinceMonday = (date.getDay() + 6) % 7;
		return date.getTime() - daysSinceMonday * A_DAY_TIME;
	};
	return startOfWeek(timeStampA) === startOfWeek(timeStampB);
};

export const isSameYear = (
	timeStampA: string | number | Date,
	timeStampB: string | number | Date
) => {
	return new Date(timeStampA).getFullYear() === new Date(timeStampB).getFullYear();
};
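The setHours(0, 0, 0, 0) trick behind isSameDay can be checked in isolation (a standalone sketch, not the project file):

```typescript
// setHours both mutates the Date and returns the resulting timestamp, so any
// two moments on the same calendar day collapse to the same midnight value.
const isSameDay = (a: Date, b: Date) =>
	new Date(a.getTime()).setHours(0, 0, 0, 0) === new Date(b.getTime()).setHours(0, 0, 0, 0);

const morning = new Date(2022, 5, 22, 8, 30);
const evening = new Date(2022, 5, 22, 23, 59);
const nextDay = new Date(2022, 5, 23, 0, 1);

console.log(isSameDay(morning, evening)); // → true
console.log(isSameDay(evening, nextDay)); // → false
```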

Weekday-number-to-Chinese function

I also defined a function that converts a weekday number into the corresponding Chinese name.

export const translateDayNumberToDayChara = (day: any) => {
	if (typeof day === 'number') {
		day = day % 7;
	}
	switch (day) {
		case 0:
			return '星期日';
		case 1:
			return '星期一';
		case 2:
			return '星期二';
		case 3:
			return '星期三';
		case 4:
			return '星期四';
		case 5:
			return '星期五';
		case 6:
			return '星期六';
		default:
			return String(day);
	}
};

Desktop capture function

Besides basic audio and video device streams, our project also needs to capture the user's desktop. In Electron we cannot use the navigator.mediaDevices.getDisplayMedia API, so we have to implement the feature ourselves.
First, in the Electron main process, we capture the user's desktop with the desktopCapturer module.

// main.js
const { desktopCapturer } = require('electron');
const ipc = require('electron').ipcMain;

ipc.handle('DESKTOP_CAPTURE', async () => {
	// handle() forwards the returned promise (or thrown error) to the renderer
	const sources = await desktopCapturer.getSources({ types: ['screen'] });
	return sources[0];
});

Then, in Global.ts, I write the following code:

function getDesktopStream(): Promise<MediaStream> {
	return new Promise((resolve) => {
		eWindow.ipc.invoke('DESKTOP_CAPTURE').then((source) => {
			(navigator as any).mediaDevices
				.getUserMedia({
					audio: {
						mandatory: {
							chromeMediaSource: 'desktop',
							chromeMediaSourceId: source.id,
						},
					},
					video: {
						mandatory: {
							chromeMediaSource: 'desktop',
							chromeMediaSourceId: source.id,
						},
					},
				})
				.then((stream: MediaStream) => {
					resolve(stream);
				});
		});
	});
}

In this way, we implement the capture of the user's desktop.

Original article: https://yzsam.com/2022/173/202206221441207925.html

Copyright notice: this article was created by [What does Xiao Li Mao eat today]; please include the original link when reposting. Thanks.