Audio audiorecord create (I)
2022-07-01 08:34:00 【Cmatrix204】
This article walks through the AudioRecord creation code flow, focusing on the key interfaces, parameter analysis, and the related logs, with comparisons along the way.
1.frameworks/base/media/java/android/media/AudioRecord.java
Parameter analysis:
audioSource: e.g. MediaRecorder.AudioSource.MIC; see MediaRecorder.AudioSource for the full set of values;
sampleRateInHz: the sample rate in Hz; 44100 Hz is the only rate guaranteed to work on all devices;
channelConfig: describes the channel configuration; AudioFormat.CHANNEL_CONFIGURATION_MONO is guaranteed to work on all devices;
audioFormat: the format of the audio data; usually AudioFormat.ENCODING_PCM_16BIT;
bufferSizeInBytes: the size in bytes of the buffer that audio data is written to during recording; use the value returned by getMinBufferSize().
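As a rough illustration of how these parameters interact, the sketch below sizes a raw PCM buffer by hand. This is only back-of-the-envelope arithmetic: in real code the buffer size must come from AudioRecord.getMinBufferSize(), and the 20 ms window used here is an arbitrary example.

```java
// Sketch: sizing a recording buffer by hand for PCM audio.
// Real code should use AudioRecord.getMinBufferSize(sampleRate, channelConfig,
// audioFormat) and never go below the value it returns.
class BufferSizing {
    /** Bytes needed to hold `millis` ms of PCM audio. */
    static int bytesForMillis(int sampleRateHz, int channelCount,
                              int bytesPerSample, int millis) {
        // frames (one sample per channel) in the window, times frame size
        int frames = sampleRateHz * millis / 1000;
        return frames * channelCount * bytesPerSample;
    }

    public static void main(String[] args) {
        // 20 ms of 44100 Hz mono 16-bit audio
        System.out.println(bytesForMillis(44100, 1, 2, 20)); // 1764 bytes
    }
}
```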

The main setup work happens in the native_setup function; note the session Id, which distinguishes one audio session from another.
@SystemApi
public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
int sessionId) throws IllegalArgumentException {
mRecordingState = RECORDSTATE_STOPPED;
...
///*add by shengjie for parse AudioRecord
//1. How the REMOTE_SUBMIX sound source is handled; not traced here, can be followed up if a project needs it
//*/
// is this AudioRecord using REMOTE_SUBMIX at full volume?
if (attributes.getCapturePreset() == MediaRecorder.AudioSource.REMOTE_SUBMIX) {
final AudioAttributes.Builder filteredAttr = new AudioAttributes.Builder();
final Iterator<String> tagsIter = attributes.getTags().iterator();
while (tagsIter.hasNext()) {
final String tag = tagsIter.next();
if (tag.equalsIgnoreCase(SUBMIX_FIXED_VOLUME)) {
mIsSubmixFullVolume = true;
Log.v(TAG, "Will record from REMOTE_SUBMIX at full fixed volume");
} else { // SUBMIX_FIXED_VOLUME: is not to be propagated to the native layers
filteredAttr.addTag(tag);
}
}
filteredAttr.setInternalCapturePreset(attributes.getCapturePreset());
mAudioAttributes = filteredAttr.build();
} else {
mAudioAttributes = attributes;
}
...
///*add by shengjie for parse AudioRecord
//2. audioParamCheck validates the parameters and assigns mRecordSource, mSampleRate and mAudioFormat
//*/
audioParamCheck(attributes.getCapturePreset(), rate, encoding);
///*add by shengjie for parse AudioRecord
//3. Get the channel count and channel mask; the mono mask is 0x10, the stereo mask is 0x0c
//*/
if ((format.getPropertySetMask()
& AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_INDEX_MASK) != 0) {
mChannelIndexMask = format.getChannelIndexMask();
mChannelCount = format.getChannelCount();
}
if ((format.getPropertySetMask()
& AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0) {
mChannelMask = getChannelMaskFromLegacyConfig(format.getChannelMask(), false);
mChannelCount = format.getChannelCount();
} else if (mChannelIndexMask == 0) {
mChannelMask = getChannelMaskFromLegacyConfig(AudioFormat.CHANNEL_IN_DEFAULT, false);
mChannelCount = AudioFormat.channelCountFromInChannelMask(mChannelMask);
}
audioBuffSizeCheck(bufferSizeInBytes);
int[] sampleRate = new int[] {mSampleRate};
///*add by shengjie for parse AudioRecord
// A session is a conversation; each one has a unique Id, ultimately managed by AudioFlinger.
// A session can be shared by multiple AudioTrack objects and MediaPlayers.
// AudioTracks and MediaPlayers sharing one session share the same AudioEffect (sound effect).
//*/
int[] session = new int[1];
session[0] = sessionId;
//TODO: update native initialization when information about hardware init failure
// due to capture device already open is available.
int initResult = native_setup( new WeakReference<AudioRecord>(this),
mAudioAttributes, sampleRate, mChannelMask, mChannelIndexMask,
mAudioFormat, mNativeBufferSizeInBytes,
session, getCurrentOpPackageName(), 0 /*nativeRecordInJavaObj*/);
if (initResult != SUCCESS) {
loge("Error code "+initResult+" when initializing native AudioRecord object.");
return; // with mState == STATE_UNINITIALIZED
}
mSampleRate = sampleRate[0];
mSessionId = session[0];
mState = STATE_INITIALIZED;
}
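The channel-mask arithmetic from step 3 and the one-element session array can be seen in isolation in this plain-Java sketch. The mask values match the documented AudioFormat.CHANNEL_IN_MONO (0x10) and CHANNEL_IN_STEREO (0xC) constants; treating the channel count as the number of set bits is a simplification of what AudioFormat.channelCountFromInChannelMask does.

```java
// Sketch of the channel-mask math referenced in step 3 above.
class ChannelMaskDemo {
    static final int CHANNEL_IN_MONO = 0x10;  // one bit set -> 1 channel
    static final int CHANNEL_IN_STEREO = 0xC; // two bits set -> 2 channels

    /** Channel count is the number of set bits in a positional input mask. */
    static int channelCountFromInMask(int mask) {
        return Integer.bitCount(mask);
    }

    public static void main(String[] args) {
        System.out.println(channelCountFromInMask(CHANNEL_IN_MONO));   // 1
        System.out.println(channelCountFromInMask(CHANNEL_IN_STEREO)); // 2

        // The session id travels through a one-element array so that
        // native_setup can write back the id actually allocated below
        // (the native side may overwrite session[0]):
        int[] session = new int[] { 0 };
    }
}
```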
2.frameworks/base/core/jni/android_media_AudioRecord.cpp
Main work: call the set() interface of AudioRecord.cpp, and set up the audiorecord_callback_cookie *lpCallbackData callback used to deliver the audio data buffers.
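The cookie holds only a weak reference to the Java AudioRecord so that the object stays garbage-collectible while native callbacks are registered. The idea can be sketched in plain Java; the names here are illustrative stand-ins, not the real framework types.

```java
import java.lang.ref.WeakReference;

// Sketch of the callback-cookie pattern: native code keeps a weak reference
// to the Java object and drops events once the object has been collected.
class CallbackCookie {
    final WeakReference<Object> recorderRef;
    boolean busy = false;

    CallbackCookie(Object recorder) {
        this.recorderRef = new WeakReference<>(recorder);
    }

    /** Deliver an event only if the Java object is still alive. */
    boolean postEvent(String what) {
        Object target = recorderRef.get();
        if (target == null) {
            return false; // recorder was garbage collected; drop the event
        }
        // the real code would invoke a method on the Java object here
        return true;
    }

    public static void main(String[] args) {
        Object recorder = new Object();
        CallbackCookie cookie = new CallbackCookie(recorder);
        System.out.println(cookie.postEvent("DATA")); // true while alive
    }
}
```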
static jint
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
jobject jaa, jintArray jSampleRate, jint channelMask, jint channelIndexMask,
jint audioFormat, jint buffSizeInBytes, jintArray jSession, jstring opPackageName,
jlong nativeRecordInJavaObj)
{
//ALOGV(">> Entering android_media_AudioRecord_setup");
//ALOGV("sampleRate=%d, audioFormat=%d, channel mask=%x, buffSizeInBytes=%d "
// "nativeRecordInJavaObj=0x%llX",
// sampleRateInHertz, audioFormat, channelMask, buffSizeInBytes, nativeRecordInJavaObj);
...
sp<AudioRecord> lpRecorder = 0;
audiorecord_callback_cookie *lpCallbackData = NULL;
// if we pass in an existing *Native* AudioRecord, we don't need to create/initialize one.
if (nativeRecordInJavaObj == 0) {
if (jaa == 0) {
ALOGE("Error creating AudioRecord: invalid audio attributes");
return (jint) AUDIO_JAVA_ERROR;
}
if (jSampleRate == 0) {
ALOGE("Error creating AudioRecord: invalid sample rates");
return (jint) AUDIO_JAVA_ERROR;
}
///*add by shengjie for parse AudioRecord setup
//1. Get the sample rate passed from the Java layer (e.g. 44100 Hz or 16 kHz)
jint elements[1];
env->GetIntArrayRegion(jSampleRate, 0, 1, elements);
int sampleRateInHertz = elements[0];
//*/
// channel index mask takes priority over channel position masks.
if (channelIndexMask) {
// Java channel index masks need the representation bits set.
localChanMask = audio_channel_mask_from_representation_and_bits(
AUDIO_CHANNEL_REPRESENTATION_INDEX,
channelIndexMask);
}
// Java channel position masks map directly to the native definition
if (!audio_is_input_channel(localChanMask)) {
ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", localChanMask);
return (jint) AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
}
///*add by shengjie for parse AudioRecord setup
//2. Get the channel count from the mask (1/2/4 channels)
uint32_t channelCount = audio_channel_count_from_in_mask(localChanMask);
//*/
// compare the format against the Java constants
///*add by shengjie for parse AudioRecord setup
//3. Convert the Java audio format to the native one (16-bit/32-bit)
audio_format_t format = audioFormatToNative(audioFormat);
if (format == AUDIO_FORMAT_INVALID) {
ALOGE("Error creating AudioRecord: unsupported audio format %d.", audioFormat);
return (jint) AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
}
//*/
///*add by shengjie for parse AudioRecord setup
size_t bytesPerSample = audio_bytes_per_sample(format);
if (buffSizeInBytes == 0) {
ALOGE("Error creating AudioRecord: frameCount is 0.");
return (jint) AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
}
//4.1 Size of one sampling frame: channel count * bytes per sample
size_t frameSize = channelCount * bytesPerSample;
//4.2 Number of sampling frames: buffer size / size of one frame
size_t frameCount = buffSizeInBytes / frameSize;
//*/
ScopedUtfChars opPackageNameStr(env, opPackageName);
// create an uninitialized AudioRecord object
lpRecorder = new AudioRecord(String16(opPackageNameStr.c_str()));
// read the AudioAttributes values
auto paa = JNIAudioAttributeHelper::makeUnique();
jint jStatus = JNIAudioAttributeHelper::nativeFromJava(env, jaa, paa.get());
if (jStatus != (jint)AUDIO_JAVA_SUCCESS) {
return jStatus;
}
ALOGV("AudioRecord_setup for source=%d tags=%s flags=%08x", paa->source, paa->tags, paa->flags);
audio_input_flags_t flags = AUDIO_INPUT_FLAG_NONE;
if (paa->flags & AUDIO_FLAG_HW_HOTWORD) {
flags = AUDIO_INPUT_FLAG_HW_HOTWORD;
}
// create the callback information:
// this data will be passed with every AudioRecord callback
lpCallbackData = new audiorecord_callback_cookie;
lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
// we use a weak reference so the AudioRecord object can be garbage collected.
///*add by shengjie for parse AudioRecord setup
//5. A weak reference to the AudioRecord.java object is bound into the lpCallbackData callback data, so recorded data can be delivered to the upper layer via callback
lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
lpCallbackData->busy = false;
//*/
///*add by shengjie for AudioRecord setup
//6. Call the set() function of AudioRecord.cpp. flags has type audio_input_flags_t, defined in audio-base.h, and marks the audio input; here it is AUDIO_INPUT_FLAG_NONE, i.e. no special settings
const status_t status = lpRecorder->set(paa->source,
sampleRateInHertz,
format, // word length, PCM
localChanMask,
frameCount,
recorderCallback,// callback_t
lpCallbackData,// void* user
0, // notificationFrames,
true, // threadCanCallJava
sessionId,
AudioRecord::TRANSFER_DEFAULT,
flags,
-1, -1, // default uid, pid
paa.get());
//*/
...
// Set caller name so it can be logged in destructor.
// MediaMetricsConstants.h: AMEDIAMETRICS_PROP_CALLERNAME_VALUE_JAVA
lpRecorder->setCallerName("java");
} else { // end if nativeRecordInJavaObj == 0)
lpRecorder = (AudioRecord*)nativeRecordInJavaObj;
// TODO: We need to find out which members of the Java AudioRecord might need to be
...
// create the callback information:
// this data will be passed with every AudioRecord callback
lpCallbackData = new audiorecord_callback_cookie;
lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
// we use a weak reference so the AudioRecord object can be garbage collected.
lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
lpCallbackData->busy = false;
}
nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
if (nSession == NULL) {
ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
goto native_init_failure;
}
// read the audio session ID back from AudioRecord in case a new session was created during set()
nSession[0] = lpRecorder->getSessionId();
env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
nSession = NULL;
...
///*add by shengjie for parse AudioRecord setup
//7. Save the lpRecorder object and the lpCallbackData callback into the corresponding javaAudioRecordFields fields of the Java object
// save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
// of the Java object
setAudioRecord(env, thiz, lpRecorder);
// save our newly created callback information in the "nativeCallbackCookie" field
// of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, (jlong)lpCallbackData);
//*/
return (jint) AUDIO_JAVA_SUCCESS;
...///*Error Handler Process*///...
}
system/media/audio/include/system/audio-base.h
typedef enum {
AUDIO_INPUT_FLAG_NONE = 0x0,
AUDIO_INPUT_FLAG_FAST = 0x1,
AUDIO_INPUT_FLAG_HW_HOTWORD = 0x2,
AUDIO_INPUT_FLAG_RAW = 0x4,
AUDIO_INPUT_FLAG_SYNC = 0x8,
AUDIO_INPUT_FLAG_MMAP_NOIRQ = 0x10,
AUDIO_INPUT_FLAG_VOIP_TX = 0x20,
AUDIO_INPUT_FLAG_HW_AV_SYNC = 0x40,
AUDIO_INPUT_FLAG_DIRECT = 0x80,
} audio_input_flags_t;
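Steps 4.1 and 4.2 above derive the frame size and frame count from the buffer size, and AudioRecord::set() later turns the frame count into a latency estimate (mLatency = (1000LL * mFrameCount) / mSampleRate). The arithmetic, with example values, looks like this:

```java
// Sketch of the frame arithmetic from steps 4.1/4.2 and the latency estimate
// later computed in AudioRecord::set().
class FrameMath {
    static int frameSize(int channelCount, int bytesPerSample) {
        return channelCount * bytesPerSample; // bytes per sampling frame
    }

    static int frameCount(int bufferSizeInBytes, int frameSize) {
        return bufferSizeInBytes / frameSize; // whole frames in the buffer
    }

    static long latencyMillis(long frameCount, long sampleRateHz) {
        return 1000L * frameCount / sampleRateHz;
    }

    public static void main(String[] args) {
        int fs = frameSize(2, 2);      // stereo, 16-bit -> 4 bytes per frame
        int fc = frameCount(7056, fs); // 7056-byte buffer -> 1764 frames
        System.out.println(latencyMillis(fc, 44100)); // 40 ms at 44100 Hz
    }
}
```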
3.frameworks/av/media/libaudioclient/AudioRecord.cpp
Parses the audio parameters; the main work is creating the AudioRecordThread and calling createRecord_l.
status_t AudioRecord::set(
audio_source_t inputSource,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
callback_t cbf,
void* user,
uint32_t notificationFrames,
bool threadCanCallJava,
audio_session_t sessionId,
transfer_type transferType,
audio_input_flags_t flags,
uid_t uid,
pid_t pid,
const audio_attributes_t* pAttributes,
audio_port_handle_t selectedDeviceId,
audio_microphone_direction_t selectedMicDirection,
float microphoneFieldDimension)
{
status_t status = NO_ERROR;
uint32_t channelCount;
pid_t callingPid;
pid_t myPid;
// Note mPortId is not valid until the track is created, so omit mPortId in ALOG for set.
ALOGV("%s(): inputSource %d, sampleRate %u, format %#x, channelMask %#x, frameCount %zu, "
"notificationFrames %u, sessionId %d, transferType %d, flags %#x, opPackageName %s "
"uid %d, pid %d",
__func__,
inputSource, sampleRate, format, channelMask, frameCount, notificationFrames,
sessionId, transferType, flags, String8(mOpPackageName).string(), uid, pid);
mTracker.reset(new RecordingActivityTracker());
mSelectedDeviceId = selectedDeviceId;
mSelectedMicDirection = selectedMicDirection;
mSelectedMicFieldDimension = microphoneFieldDimension;
...
///*add by shengjie for parse AudioRecord set
//1. Get the transferType (TRANSFER_SYNC/TRANSFER_CALLBACK)
mTransfer = transferType;
//*/
...
if (pAttributes == NULL) {
mAttributes = AUDIO_ATTRIBUTES_INITIALIZER;
mAttributes.source = inputSource;
if (inputSource == AUDIO_SOURCE_VOICE_COMMUNICATION
|| inputSource == AUDIO_SOURCE_CAMCORDER) {
mAttributes.flags |= AUDIO_FLAG_CAPTURE_PRIVATE;
}
} else {
// stream type shouldn't be looked at, this track has audio attributes
memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
ALOGV("%s(): Building AudioRecord with attributes: source=%d flags=0x%x tags=[%s]",
__func__, mAttributes.source, mAttributes.flags, mAttributes.tags);
}
mSampleRate = sampleRate;
// these below should probably come from the audioFlinger too...
if (format == AUDIO_FORMAT_DEFAULT) {
format = AUDIO_FORMAT_PCM_16_BIT;
}
// validate parameters
// AudioFlinger capture only supports linear PCM
if (!audio_is_valid_format(format) || !audio_is_linear_pcm(format)) {
ALOGE("%s(): Format %#x is not linear pcm", __func__, format);
status = BAD_VALUE;
goto exit;
}
mFormat = format;
if (!audio_is_input_channel(channelMask)) {
ALOGE("%s(): Invalid channel mask %#x", __func__, channelMask);
status = BAD_VALUE;
goto exit;
}
mChannelMask = channelMask;
channelCount = audio_channel_count_from_in_mask(channelMask);
mChannelCount = channelCount;
if (audio_is_linear_pcm(format)) {
mFrameSize = channelCount * audio_bytes_per_sample(format);
} else {
mFrameSize = sizeof(uint8_t);
}
// mFrameCount is initialized in createRecord_l
mReqFrameCount = frameCount;
mNotificationFramesReq = notificationFrames;
// mNotificationFramesAct is initialized in createRecord_l
mSessionId = sessionId;
ALOGV("%s(): mSessionId %d", __func__, mSessionId);
callingPid = IPCThreadState::self()->getCallingPid();
myPid = getpid();
if (uid == AUDIO_UID_INVALID || (callingPid != myPid)) {
mClientUid = IPCThreadState::self()->getCallingUid();
} else {
mClientUid = uid;
}
if (pid == -1 || (callingPid != myPid)) {
mClientPid = callingPid;
} else {
mClientPid = pid;
}
mOrigFlags = mFlags = flags;
mCbf = cbf;
///*add by shengjie for parse AudioRecord set
//2. Create an AudioRecordThread to handle the related processing
if (cbf != NULL) {
mAudioRecordThread = new AudioRecordThread(*this);
mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
// thread begins in paused state, and will not reference us until start()
}
//*/
// create the IAudioRecord
///*add by shengjie for parse AudioRecord set
//3. Use createRecord_l to set up the audio input device and related parameters
{
AutoMutex lock(mLock);
status = createRecord_l(0 /*epoch*/, mOpPackageName);
}
//*/
ALOGV("%s(%d): status %d", __func__, mPortId, status);
if (status != NO_ERROR) {
if (mAudioRecordThread != 0) {
mAudioRecordThread->requestExit(); // see comment in AudioRecord.h
mAudioRecordThread->requestExitAndWait();
mAudioRecordThread.clear();
}
goto exit;
}
mUserData = user;
// TODO: add audio hardware input latency here
mLatency = (1000LL * mFrameCount) / mSampleRate;
mMarkerPosition = 0;
mMarkerReached = false;
mNewPosition = 0;
mUpdatePeriod = 0;
AudioSystem::acquireAudioSessionId(mSessionId, mClientPid, mClientUid);
mSequence = 1;
mObservedSequence = mSequence;
mInOverrun = false;
mFramesRead = 0;
mFramesReadServerOffset = 0;
///*error handler process*///
}
The createRecord_l method is implemented as follows:
Important interface: AudioFlinger.cpp's createRecord will subsequently parse and dispatch a series of parameters;
// must be called with mLock held
status_t AudioRecord::createRecord_l(const Modulo<uint32_t> &epoch, const String16& opPackageName)
{
const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
IAudioFlinger::CreateRecordInput input;
IAudioFlinger::CreateRecordOutput output;
sp<media::IAudioRecord> record;
...
///*loading input related parameters*/
///*add by shengjie for parse AudioRecord createRecord_l
//1. IAudioFlinger.cpp calls AudioFlinger.cpp's createRecord via Binder inter-process communication; that call covers getInputForAttr (building the audio input device), createRecordTrack_l, getOrphanEffectChain_l and the other related functions.
record = audioFlinger->createRecord(input,
output,
&status);
//*/
...
// Starting address of buffers in shared memory.
// The buffers are either immediately after the control block,
// or in a separate area at discretion of server.
void *buffers;
if (output.buffers == 0) {
buffers = cblk + 1;
} else {
// TODO: Using unsecurePointer() has some associated security pitfalls
// (see declaration for details).
// Either document why it is safe in this case or address the
// issue (e.g. by copying).
buffers = output.buffers->unsecurePointer();
if (buffers == NULL) {
ALOGE("%s(%d): Could not get buffer pointer", __func__, mPortId);
status = NO_INIT;
goto exit;
}
}
// invariant that mAudioRecord != 0 is true only after set() returns successfully
if (mAudioRecord != 0) {
IInterface::asBinder(mAudioRecord)->unlinkToDeath(mDeathNotifier, this);
mDeathNotifier.clear();
}
mAudioRecord = record;
mCblkMemory = output.cblk;
mBufferMemory = output.buffers;
IPCThreadState::self()->flushCommands();
mCblk = cblk;
...
// update proxy
mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
mProxy->setEpoch(epoch);
mProxy->setMinimum(mNotificationFramesAct);
...
}
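The `buffers = cblk + 1` branch above reflects a layout where the control block sits at the start of the shared region and the data buffers begin immediately after it. A rough plain-Java sketch of that offset logic follows; the 64-byte control-block size is hypothetical, not the real audio_track_cblk_t size.

```java
import java.nio.ByteBuffer;

// Sketch of the shared-memory layout: a fixed-size control block at offset 0,
// with the audio buffers immediately after it (the C++ `cblk + 1`).
class SharedRegion {
    static final int CBLK_SIZE = 64; // hypothetical control-block size

    /** Returns a view positioned at the start of the data buffers. */
    static ByteBuffer buffersView(ByteBuffer region) {
        ByteBuffer dup = region.duplicate();
        dup.position(CBLK_SIZE); // skip the control block, like `cblk + 1`
        return dup.slice();
    }

    public static void main(String[] args) {
        ByteBuffer region = ByteBuffer.allocate(CBLK_SIZE + 4096);
        System.out.println(buffersView(region).capacity()); // 4096
    }
}
```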
4.frameworks/av/media/libaudioclient/IAudioFlinger.cpp
virtual sp<media::IAudioRecord> createRecord(const CreateRecordInput& input,
CreateRecordOutput& output,
status_t *status)
{
Parcel data, reply;
sp<media::IAudioRecord> record;
data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
if (status == nullptr) {
return record;
}
input.writeToParcel(&data);
///*add by shengjie for IAudioFlinger createRecord
// Obtain the shared buffer via Binder inter-process communication
status_t lStatus = remote()->transact(CREATE_RECORD, data, &reply);
if (lStatus != NO_ERROR) {
ALOGE("createRecord transaction error %d", lStatus);
*status = DEAD_OBJECT;
return record;
}
//*/
*status = reply.readInt32();
if (*status != NO_ERROR) {
ALOGE("createRecord returned error %d", *status);
return record;
}
record = interface_cast<media::IAudioRecord>(reply.readStrongBinder());
if (record == 0) {
ALOGE("createRecord returned a NULL IAudioRecord with status OK");
*status = DEAD_OBJECT;
return record;
}
output.readFromParcel(&reply);
return record;
}
5.frameworks/av/services/audioflinger/AudioFlinger.cpp
getInputForAttr: maps the audio input information to the corresponding audio device
checkRecordThread_l: looks up the RecordThread that will handle the recording configuration
getOrphanEffectChain_l: retrieves any orphaned effect chain (gain/effect settings) for the session
sp<media::IAudioRecord> AudioFlinger::createRecord(const CreateRecordInput& input,
CreateRecordOutput& output,
status_t *status)
{
sp<RecordThread::RecordTrack> recordTrack;
sp<RecordHandle> recordHandle;
sp<Client> client;
status_t lStatus;
audio_session_t sessionId = input.sessionId;
audio_port_handle_t portId = AUDIO_PORT_HANDLE_NONE;
...
///*add by shengjie for parse AudioFlinger createRecord
//1. Parse the input information and select the matching audio input device
lStatus = AudioSystem::getInputForAttr(&input.attr, &output.inputId,
input.riid,
sessionId,
// FIXME compare to AudioTrack
clientPid,
clientUid,
input.opPackageName,
&input.config,
output.flags, &output.selectedDeviceId, &portId);
//*/
{
Mutex::Autolock _l(mLock);
RecordThread *thread = checkRecordThread_l(output.inputId);
if (thread == NULL) {
ALOGE("createRecord() checkRecordThread_l failed, input handle %d", output.inputId);
lStatus = BAD_VALUE;
goto Exit;
}
...
///*add by shengjie for AudioFlinger createRecord
//2. Create the RecordTrack that manages the recording buffer and data
recordTrack = thread->createRecordTrack_l(client, input.attr, &output.sampleRate,
input.config.format, input.config.channel_mask,
&output.frameCount, sessionId,
&output.notificationFrameCount,
callingPid, clientUid, &output.flags,
input.clientInfo.clientTid,
&lStatus, portId,
input.opPackageName);
//*/
...
// Check if one effect chain was awaiting for an AudioRecord to be created on this
// session and move it to this thread.
///*add by shengjie for parse AudioFlinger createRecord
//3. Enable any audio-input effect chain associated with this session
sp<EffectChain> chain = getOrphanEffectChain_l(sessionId);
//*/
if (chain != 0) {
Mutex::Autolock _l(thread->mLock);
thread->addEffectChain_l(chain);
}
break;
}
// End of retry loop.
// The lack of indentation is deliberate, to reduce code churn and ease merges.
}
output.cblk = recordTrack->getCblk();
output.buffers = recordTrack->getBuffers();
output.portId = portId;
// return handle to client
recordHandle = new RecordHandle(recordTrack);
...
*status = lStatus;
return recordHandle;
}
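Conceptually, checkRecordThread_l is just a lookup of the RecordThread previously opened for the input handle that getInputForAttr returned. A minimal sketch of that registry idea, with illustrative stand-in types (the real AudioFlinger keeps these in mRecordThreads, keyed by audio_io_handle_t):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of checkRecordThread_l: find the record thread opened for an
// input handle, or null so the caller can report BAD_VALUE.
class RecordThreadRegistry {
    static final class RecordThread {
        final int inputId;
        RecordThread(int inputId) { this.inputId = inputId; }
    }

    private final Map<Integer, RecordThread> mRecordThreads = new HashMap<>();

    /** Called when an input is opened, registering its thread. */
    void onInputOpened(int inputId) {
        mRecordThreads.put(inputId, new RecordThread(inputId));
    }

    /** Returns the thread for the handle, or null if none was opened. */
    RecordThread checkRecordThread(int inputId) {
        return mRecordThreads.get(inputId);
    }

    public static void main(String[] args) {
        RecordThreadRegistry flinger = new RecordThreadRegistry();
        flinger.onInputOpened(42);
        System.out.println(flinger.checkRecordThread(42) != null); // true
        System.out.println(flinger.checkRecordThread(7) != null);  // false
    }
}
```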
Summary:
A follow-up article will analyze the getInputForAttr, checkRecordThread_l and getOrphanEffectChain_l interfaces inside AudioFlinger's createRecord in more detail.
The point of tracing this flow is to be able to insert project-specific handling for microphone arrays and special data-processing configurations, taking special paths under the overall session/client control to meet project needs.