Cannot find a valid baseurl for repo: HDP-3.1-repo-1
2022-07-26 22:40:00 【开着拖拉机回家】

Contents
1. Version Information

Installation error message
stderr:
Failed to execute command: rpm -qa | grep smartsense- || yum -y install smartsense-hst || rpm -i /var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SMARTSENSE/package/files/rpm/*.rpm; Exit code: 1; stdout: Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
; stderr:
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: HDP-3.1-repo-1
error: File not found by glob: /var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SMARTSENSE/package/files/rpm/*.rpm

The contents of ambari-hdp-1.repo are as follows:
[HDP-3.1-repo-1]
name=HDP-3.1-repo-1
baseurl=
path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=
path=/
enabled=1
gpgcheck=0

2. Solution
Reference:
Root cause : https://issues.apache.org/jira/browse/AMBARI-25069
Workaround :
This is a JavaScript bug in Ambari that happens when using a local repository and the cluster has no internet access.
To work around this:
Steps
1) Go to /usr/lib/ambari-server/web/javascripts
cd /usr/lib/ambari-server/web/javascripts
2) Take a backup of app.js
cp app.js app.js_backup
3) Edit app.js
Find the line (39892): onNetworkIssuesExist: function () {
Change the block from:
/**
 * Use Local Repo if some network issues exist
 */
onNetworkIssuesExist: function () {
  if (this.get('networkIssuesExist')) {
    this.get('content.stacks').forEach(function (stack) {
      stack.setProperties({
        usePublicRepo: false,
        useLocalRepo: true
      });
      stack.cleanReposBaseUrls();
    });
  }
}.observes('networkIssuesExist'),
to
/**
 * Use Local Repo if some network issues exist
 */
onNetworkIssuesExist: function () {
  if (this.get('networkIssuesExist')) {
    this.get('content.stacks').forEach(function (stack) {
      if (stack.get('useLocalRepo') != true) {
        stack.setProperties({
          usePublicRepo: false,
          useLocalRepo: true
        });
        stack.cleanReposBaseUrls();
      }
    });
  }
}.observes('networkIssuesExist'),
as per : https://github.com/apache/ambari/pull/2743/files
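The exact line number can differ between Ambari builds, so rather than relying on 39892, a quick way to locate the block (assuming the default install path) is:

cd /usr/lib/ambari-server/web/javascripts
grep -n "onNetworkIssuesExist" app.js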
Later, as you have already deployed the cluster, you need to reset the cluster. (Caution: this erases all the configs you created previously in Step 6, and the hosts and services you selected will need to be selected again.)
Command :
ambari-server reset
Then hard-reload the page and start the Create Cluster wizard again.
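A minimal sketch of that sequence, assuming you accept losing the existing cluster definition (reset prompts for confirmation before wiping anything):

ambari-server stop
ambari-server reset    # wipes the cluster definition stored in the Ambari database
ambari-server start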
In case you are already at Step 9 and cannot proceed with ambari-server reset (as it involves adding a lot of configs again), the steps below are for you.
Prerequisites: the cluster is now at the deployment step (Step 9) and you only have a Retry button to press.
Steps
1) Stop ambari-server
2) Log in to the database
3) Use the command below to list all the contents of the repo_definition table:
select * from repo_definition;
4) You will see that base_url is empty for all the rows in the table
5) Correct the base_url for every row and update it using the command:
update repo_definition set base_url='<YOUR BASE URL>' where id=<THE CORRESPONDING ID>;
for example:
update repo_definition set base_url='http://asnaik.example.com/localrepo/HDP-3.1' where id=9;
6) After correcting all the base_url values in the repo_definition table, also delete the empty repo files created by Ambari from /etc/yum.repos.d (see the sketch after these steps).
7) Start Ambari, log in to the UI, and press the Retry button. The installation should then proceed smoothly.
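For step 6, a sketch of the cleanup on each cluster host, assuming the generated files follow the ambari-hdp-1.repo naming shown above:

# remove the repo files Ambari generated with an empty baseurl, then clear the yum cache
rm -f /etc/yum.repos.d/ambari-hdp-*.repo
yum clean all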
Hope this helps.
I had already reached the last step, so all I could do was press Retry.

Here are my three base URLs:
http://hadoop-ambari/HDP/centos7/3.1.0.0-78
http://hadoop-ambari/HDP-GPL/centos7/3.1.0.0-78
http://hadoop-ambari/HDP-UTILS/centos7/1.1.0.22
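Before touching the database, it is worth confirming that each base URL actually serves yum metadata; repodata/repomd.xml is the file yum fetches, so each of these should return HTTP 200:

curl -I http://hadoop-ambari/HDP/centos7/3.1.0.0-78/repodata/repomd.xml
curl -I http://hadoop-ambari/HDP-GPL/centos7/3.1.0.0-78/repodata/repomd.xml
curl -I http://hadoop-ambari/HDP-UTILS/centos7/1.1.0.22/repodata/repomd.xml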
After stopping ambari-server, we modify the repo_definition table directly.
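A minimal sketch of that update, assuming the default embedded PostgreSQL database (adjust the client if you use MySQL); the row ids below are examples only, so run the SELECT first and substitute the ids it returns:

ambari-server stop
sudo -u postgres psql ambari <<'SQL'
select id, base_url from repo_definition;
-- the ids below are hypothetical; use the ids returned by the SELECT above
update repo_definition set base_url='http://hadoop-ambari/HDP/centos7/3.1.0.0-78' where id=9;
update repo_definition set base_url='http://hadoop-ambari/HDP-GPL/centos7/3.1.0.0-78' where id=10;
update repo_definition set base_url='http://hadoop-ambari/HDP-UTILS/centos7/1.1.0.22' where id=11;
SQL
ambari-server start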
