Following several requests from users over the past eight years [0],
we are finally enabling access to ToolsDB's "public" databases (the
ones with a name ending with "_p") [1] from both Quarry [2] and
Superset [3].
The data stored in those databases has always been accessible to
every Toolforge user, but after this change it will become more
broadly accessible: Quarry can be used by anyone with a
Wikimedia account, and saved queries in Quarry can be shared via
public links that require no login at all.
== This change is planned to go live on Monday, July 1st. ==
If you have any concerns or questions related to this change, please
leave a comment on the Phabricator task [0] or one of its subtasks.
Thanks to everyone for your patience and for keeping the task alive
over the years!
[0] https://phabricator.wikimedia.org/T151158
[1] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Database#Privileges_on_t…
[2] https://meta.wikimedia.org/wiki/Research:Quarry
[3] https://superset.wmcloud.org/
--
Francesco Negri (he/him) -- IRC: dhinus
Site Reliability Engineer, Cloud Services team
Wikimedia Foundation
Dear all,
Has anyone figured out how to use Toolforge Build Service for a tool (job +
webservice) written in golang?
I'm trying to switch qrank.toolforge.org to Toolforge Build Service, but
I'm struggling with it.
To debug this, I wrote a minimal job that just prints a hello message to
stdout:
https://github.com/brawer/wikidata-qrank/blob/main/cmd/hello/main.go
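For readers who don't want to open the link: the program is essentially
the canonical hello-world (this is a paraphrased sketch, not a verbatim
copy of the linked file):

package main

import "fmt"

func main() {
    // The job's only purpose is to prove that the container starts and
    // that whatever it prints to stdout ends up in the job's log.
    fmt.Println("Hello from the qrank hello job!")
}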
Because Toolforge is emulating Heroku, it appears necessary to tell the
buildpack which binaries should get packaged into the container. The
"// +heroku install" comment on line 14 of the following go.mod file
seems to do the trick, together with listing the binaries again in a
Heroku Procfile:
https://github.com/brawer/wikidata-qrank/blob/main/go.mod#L14
https://github.com/brawer/wikidata-qrank/blob/main/Procfile
When running the following command on login.toolforge.org, the project
seems to be built just fine:
$ toolforge build start https://github.com/brawer/wikidata-qrank
Some interesting log messages from the build service:
[step-build] 2024-04-30T18:13:31.170521374Z Building packages:
[step-build] 2024-04-30T18:13:31.170585858Z - ./cmd/hello
[step-build] 2024-04-30T18:13:31.170599159Z - ./cmd/qrank-builder
[step-build] 2024-04-30T18:13:31.170608711Z - ./cmd/webserver
[step-build] 2024-04-30T18:13:44.563334902Z
[step-build] 2024-04-30T18:13:44.563404532Z [Setting launch table]
[step-build] 2024-04-30T18:13:44.563417703Z Detected processes:
[step-build] 2024-04-30T18:13:44.563427021Z - hello: hello
[step-build] 2024-04-30T18:13:44.563437573Z - qrank-builder: qrank-builder
[step-build] 2024-04-30T18:13:44.563446463Z - webserver: webserver
[step-build] 2024-04-30T18:13:44.563488376Z - web: hello
[step-build] 2024-04-30T18:13:44.577068596Z
[step-build] 2024-04-30T18:13:44.577163104Z [Discovering process types]
[step-build] 2024-04-30T18:13:44.579933448Z Procfile declares types -> web, qrank-builder, hello
The "web: hello" in the launch table looks very suspicious; it's not what
my Procfile states (see link above). But at least the build seems to have
been successful:
$ toolforge build show
Build ID: qrank-buildpacks-pipelinerun-vhcrt
Start Time: 2024-04-30T18:13:03Z
End Time: 2024-04-30T18:14:00Z
Status: ok
Message: Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0
Parameters:
Source URL: https://github.com/brawer/wikidata-qrank
Ref: N/A
Envvars: N/A
Destination Image: tools-harbor.wmcloud.org/tool-qrank/tool-qrank:latest
Now, I'd be happy to just run the "hello" job and see its output log.
However, when I do this:
$ toolforge jobs run --command hello --image tool-qrank/tool-qrank:latest --mount=all --filelog
The job runs for about five minutes (?!?!), and then:
$ cat hello.out
ERROR: failed to launch: bash exec: argument list too long
So clearly something is off... but what? How does one run a Go tool on
Toolforge when using the Build Service?
Thanks for any help,
— Sascha
Hello!
You may have been notified about Buster VM deprecation on projects where
you are not an admin.
This was in error.
Kindly ignore.
Thanks!
--
Seyram Komla Sapaty
Developer Advocate
Wikimedia Cloud Services
Hello,
I'm trying to tunnel to my Cloud VPS Trove instance so that I can test some code
against the production database.
I'm using the following:
ssh -N -L 4177:tdlqt33y3nt.svc.trove.eqiad1.wikimedia.cloud:3306 login.toolforge.org
I've also tried:
ssh -N -L 4177:tdlqt33y3nt.svc.trove.eqiad1.wikimedia.cloud:3306 mwcurator.mwoffliner.eqiad1.wikimedia.cloud
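For context, the application points at the local end of the tunnel
(127.0.0.1:4177) rather than at the Trove hostname directly. The pattern
looks roughly like this (a Go sketch with placeholder credentials and
database name, assuming the instance speaks the MySQL/MariaDB protocol,
as the forwarded port 3306 suggests):

package main

import (
    "database/sql"
    "log"

    // MySQL/MariaDB driver, registered for database/sql.
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // Connect to the local end of the SSH tunnel, not to the Trove host.
    // "user", "pass", and "mydb" are placeholders.
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:4177)/mydb")
    if err != nil {
        log.Fatalf("invalid DSN: %v", err)
    }
    defer db.Close()

    // Ping opens a real connection, so tunnel failures surface here.
    if err := db.Ping(); err != nil {
        log.Fatalf("connection through the tunnel failed: %v", err)
    }
    log.Println("connected to Trove through the SSH tunnel")
}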
The initial connection seems to work, but in my application I'm getting
dropped connections. I see the following in the terminal where I set up the
tunnel:
channel 2: open failed: connect failed: Connection refused
channel 2: open failed: connect failed: Connection refused
I imagine this isn't a fully supported workflow, but I'm wondering if
there's some way to get it to work?
Thanks,
-Travis
We’re going to keep both Gerrit and GitLab.
Many of the repositories now on Gerrit require features that are lacking in
GitLab:
- Cross-repository dependent merge requests
- Project gating and deterministic, atomic merges
- Stacked patchsets
- Multiple reviewers
Likewise, GitLab has features that our developers have come to depend on:
- Familiar fork, branch, and merge workflow
- Self-service repository creation
- Jupyter Notebook rendering
- GitLab's self-service CI/CD pipeline
- Bring your own runners
- Artifact and packages registries
As a result, we'll be running both systems for at least the next two years.
More information is available on MediaWiki.org[0], summarized below.
___
*Details*
These repositories have requirements that mean they must remain on Gerrit:
- MediaWiki core
- The subset of extensions and skins which track MediaWiki core’s
mainline branch (including all Wikimedia production-deployed extensions,
skins, and MediaWiki vendor)
- SRE’s Puppet repository along with dependencies and dependent
repositories.
Other repositories now on GitLab may return to Gerrit to lessen the mental
burden of working with two systems.
Likewise, the stewards of some repositories on Gerrit may still wish to
migrate them to GitLab.
We'll use the Phabricator tag "GitLab (Project Migration)"[1] to track
requests from project stewards to migrate between systems. We’ll be
monitoring that workboard closely through the end of this calendar year to
assist developers with their migrations.
___
*What now*
- We have no intention of shutting down either system for the next two
years.
- We've posted a longer explanation on MediaWiki.org[0]. Please use the
talk page there (rather than this mailing list) for discussion.
- We've gathered a list of questions we anticipate some folks may have
on MediaWiki.org[2].
- Specific requests from project stewards to migrate repositories
between either of our systems should use the Phabricator tag "GitLab
(Project Migration)"[1].
- We'll be hosting office hours to answer any questions. More details
about these sessions will come later.
The decision to keep both systems was challenging. Having two code forges
adds to the fragmentation of our systems, the mental overhead for our
developers, and the maintenance burden on stewards. But each system is
well-suited to a subset of our needs, and keeping both safeguards the
productivity of our developers and the stability of our systems.
For more details, I encourage you to review MediaWiki.org[0] and engage on
the talk page.
Tyler Cipriani (he/him)
Engineering Manager, Release Engineering
Wikimedia Foundation
[0]: <https://www.mediawiki.org/wiki/GitLab/Migration_status>
[1]: <https://phabricator.wikimedia.org/project/profile/5552/>
[2]: <https://www.mediawiki.org/wiki/GitLab/Migration_status/FAQ>
Hello Cloud users,
Just wanted to give a heads-up about an upcoming maintenance window for our cloudelastic hosts <https://wikitech.wikimedia.org/wiki/Help:CirrusSearch_elasticsearch_replicas>. We will be migrating them to a new load balancer starting Thursday, Jun 20 at 14:00 UTC. The service will be down throughout the maintenance, which is expected to last between 1 and 2 hours.
Full details are available in this Phabricator task <https://phabricator.wikimedia.org/T367511>. If you have any questions, feel free to respond here or on IRC (see my nick below).
Best,
Brian King
SRE, Data Platform/Search Platform
Wikimedia Foundation
IRC: inflatador
Hi all!
This is to let you know that Toolforge continuous jobs now support internal
domain names!
This means you can now make a request from any job type to a continuous job
using the name of the continuous job directly, without having to know or
keep track of its internal ephemeral IP address.
To use this, specify `--port <number>` when creating your continuous job.
Once that is done, the continuous job can be reached from other jobs at
`https://<continuous-job-name>:<port>`.
Note: This can only be configured for continuous jobs.
Note: The name of the continuous job automatically becomes the internal
domain name and there is currently no way to specify a custom name.
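As an illustration, another job could then call the continuous job over
HTTP. The sketch below is in Go; the job name "myapi", the port 8000, and
the /healthz path are made-up examples, and the URL scheme follows the
form given above (if your job serves plain HTTP on that port, use
http:// instead):

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // "myapi" is the continuous job's name, which doubles as its internal
    // domain name; 8000 is the value that was passed to --port.
    resp, err := http.Get("https://myapi:8000/healthz")
    if err != nil {
        log.Fatalf("request to continuous job failed: %v", err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatalf("reading response failed: %v", err)
    }
    fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}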
Also, a reminder that you can find this and other smaller user-facing
updates about Toolforge platform features here:
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Changelog
Original task: https://phabricator.wikimedia.org/T348758
--
Ndibe Raymond Olisaemeka
Software Engineer - Technical Engagement
Wikimedia Foundation <https://wikimediafoundation.org/>
Hi,
I've added a new set of flavors on Cloud VPS.[0] The new flavors are
almost identical to the existing `g3` flavors - the only difference is
that instances on the new `g4` flavors get scheduled on hypervisors
running more modern networking software.[1] When creating new
instances, either by hand or via some automated means, please use the
new flavors from now on. Using g3 flavors for new instances will be
disabled in the coming days.
We will be migrating existing VMs to the new flavors at some point in
the near future. This will most likely involve briefly shutting down
each VM as it's migrated from one network agent to the other. There
will be a dedicated announcement once the timeline is more certain.
As always, if your project needs some coordination
before restarting things or this is otherwise a problem, feel free to
contact us.[2]
Changing a g3 instance to a g4 flavor (or the other way around)
without manual intervention would instead make that instance
inaccessible. For that reason, instance resizing has temporarily been
disabled; if you need to resize an existing instance for whatever
reason before it has been migrated, please contact us.
[0]: https://wikitech.wikimedia.org/wiki/Help:Cloud_VPS_Instances#Instance_infor…
[1]: https://phabricator.wikimedia.org/T364458
[2]: https://wikitech.wikimedia.org/wiki/Help:Cloud_Services_communication
Taavi
--
Taavi Väänänen (he/him)
Site Reliability Engineer, Cloud Services
Wikimedia Foundation
Cloud-vps users:
There are now a mere two weeks remaining before Debian Buster ends its
period of long-term support. After June 30th, security upgrades will no
longer be available for this release, and VMs running Buster will become
ever more risky and difficult to maintain.
As of today there are still 143 Buster servers running in our cloud[0]
-- some of them are probably yours! Please take some time to delete VMs
that are no longer needed, and rebuild those that are still needed with
a more modern release, ideally Debian Bookworm.
There is a task for your project on Phabricator[1] where you can update
your progress. If you have vital VMs that you absolutely cannot rebuild
by July 15th, please update the associated task with your plan and
anticipated timeline. WMCS staff will start shutting down unacknowledged
VMs in mid-July in order to attract the attention of users who do not
read email or follow Phabricator.
Buster's end of life has been a long time coming, and frequently
announced. If you've been waiting for the right time to think about
this, the time is now.
Thank you!
-Andrew + WMCS staff
[0] https://os-deprecation.toolforge.org/
[1] https://phabricator.wikimedia.org/project/view/6373/
Hello all,
The Code of Conduct Committee is a team of five trusted individuals (plus
five auxiliary members) with diverse affiliations, responsible for general
enforcement of the Code of Conduct for Wikimedia technical spaces.
Committee members are in charge of processing complaints, discussing with
the parties affected, agreeing on resolutions, and following up on their
enforcement. For more on their duties and roles, see
https://www.mediawiki.org/wiki/Code_of_Conduct/Committee.
This is a call for community members interested in volunteering for
appointment to this committee. Volunteers serving in this role should be
experienced Wikimedians or have had experience serving in a similar
position before.
The current committee is doing the selection and will research and discuss
candidates. Six weeks before the beginning of the next Committee term, they
will publish their candidate slate (a list of candidates) on-wiki. The
community can provide feedback on these candidates, via private email to
the group choosing the next Committee. The feedback period will be two
weeks. The current Committee will then either finalize the slate, or update
the candidate slate in response to concerns raised. If the candidate slate
changes, there will be another two-week feedback period covering the newly
proposed members. After the selections are finalized, there will be a
training period, after which the new Committee is appointed. The current
Committee continues to serve until the feedback, selection, and training
process is complete.
If you are interested in serving on this committee or would like to
nominate a candidate, please write an email to techconductcandidates AT
wikimedia.org with details of your experience on the projects, your
thoughts on the Code of Conduct and the committee, what you hope to bring
to the role, and whether you would prefer to be a main or an auxiliary
member of the committee. The committee consists of five main members plus
five auxiliary members, who will serve for a year; all applications are
appreciated and will be carefully considered. The deadline for
applications is *the end of day on June 25, 2024*.
Please feel free to pass this invitation along to any users who you think
may be qualified and interested.
Best,
Amir Sarabadani, on behalf of the Code of Conduct Committee