Community Forums bring together groups of people from all over the world to discuss tough issues, consider hard choices, and share their perspectives to improve the experiences people have across Meta’s technologies.
Meta’s Community Forums allow us to learn directly from the people who use our platforms and technologies. These forums bring together thousands of people from around the world to weigh in on some of the tech industry’s toughest questions.
In each Community Forum, participants start by learning about a specific topic through carefully prepared educational materials. They then join small group discussions where they share their experiences and perspectives. Expert advisors are available to answer questions before participants provide their final feedback through surveys.
Their responses, and the analysis of the results, produce insights on the public’s understanding of and concerns about these emerging technologies, and ultimately inform the development of our products and policies. For example, our Community Forum on the Metaverse played a direct role in Meta adding mute assist, a form of automatic speech detection in public worlds, to the catalog of tools available to creators on Horizon. We invest in Community Forums because it’s important that our products represent the people who use them.
We started by looking at deliberative democratic mechanisms, such as Citizens' Assemblies, that have been used for years to provide public input into government policies. An initial pilot was run on our approach to climate misinformation in 2022. Based on those learnings, we explored how we might scale this approach to more people, and launched another Forum on the issue of bullying and harassment in the Metaverse. Both of these showed that Community Forums can provide rich insights for our product and policy development.
Reflecting the collaborative nature of this work, Meta has consulted and partnered with a variety of deliberative democracy experts, civil society organizations, government policymakers, and academics to ensure our forums are constructed in accordance with deliberative democracy best practices and standards. This process helps us mitigate biases while also sharing insights with others in the deliberative democracy community. The design and execution of our Forums to date have been carried out in partnership with Stanford's Deliberative Democracy Lab and the Behavioural Insights Team.
| Title | Dates of Forum | Countries | Representative Sample | Guiding Questions |
|---|---|---|---|---|
| 2024 Community Forum on Generative AI | October 2024 | India, Turkey, Nigeria, Saudi Arabia, South Africa | 887 | “How should AI agents provide proactive, personalized experiences to users?” “How should AI agents and users interact?” |
| 2023 Community Forum on Generative AI | October 2023 | Brazil, Germany, Spain, United States | 1,545 | “What principles should guide generative AI’s engagement with users?” |
| Community Forum on Bullying and Harassment in the Metaverse | December 2022 | 32 countries | 6,488 | “To what extent do the platform owners, such as Meta, have a responsibility to act to protect against bullying and harassment, particularly since the metaverse is an immersive reality in which bullying and harassment may have severe consequences?” |
| Pilot Community Forum on Climate Misinformation | 2022 | Brazil, France, India, Nigeria, United States | 257 | “What approach should Meta take to climate content that may be misleading or confusing but does not contain a false claim that can be debunked by fact-checkers?” |
We will periodically publish updates on the impact each of our Forums has on our decisions over time.
Our 2024 Community Forum on Generative AI
In October 2024, we conducted a second Community Forum on GenAI which included a total of approximately 900 members of the public across Nigeria, South Africa, Saudi Arabia, and Turkey. This Forum solicited feedback from Meta’s users on how people should interact with AI agents and whether they would prefer these interactions to be more personalized. The participants explored themes such as optimism towards AI in the Global South and preferences around cultural norms.
The Forum resulted in several key findings on the principles that should underpin AI agents, including:
Participants supported AI agents remembering their prior conversations to personalize their experience, as long as transparency and user controls are in place.
Participants were more supportive of culturally/regionally-tailored AI agents compared to standardized AI agents.
Participants supported proactive prompting with clarifying questions to generate personalized outputs from AI agents.
For more details, view the full report from our partners at Stanford here.
Our 2023 Community Forum on Generative AI
We announced our 2023 Community Forum on GenAI shortly after the technology began to capture the public imagination, because we wanted to better understand the underlying principles that should inform how chatbots provide guidance and advice, as well as how they should interact with people. This forum included a total of 1,545 participants from Brazil, Germany, Spain and the United States. The results of the Forum have helped shape some of the most foundational choices to date in our design of GenAI products and will continue to inform the direction of Generative AI products.
This Forum informed our strategy on how, and under what circumstances, chatbots can be personalized for users and what information they can remember to support the user experience.
For example, feedback from the public showed that participants preferred user controls that enable them to personalize experiences and clear transparency about what data was being used. This has guided features associated with transparency and disclosure of GenAI activities, such as user notifications.
In addition, the findings from the Forum informed company strategy on how to approach issues of AI chatbot memory in ways that are consistent with user preferences. For example, there was an overwhelming consensus that, as long as users are made aware, AI chatbots should use people’s past conversations to offer the best experience.
The Forum also helped a team of design and product experts decide that AI agents should launch with a more neutral user experience as the default setting. Participants highlighted the importance of considering vulnerable users in the development of AI, which has informed our product strategy.
Given the rapidly changing nature of GenAI technologies, we purposely designed the 2023 Community Forum to provide us with direction at the foundational level which focuses on the values that underlie how our technology interacts with people. As a result, we can return to this input over time and it can provide lasting direction which scales to a large number of product decisions we have to make. We are committed to retaining this feedback and considering it in conjunction with the input we receive from subsequent Community Forums on these issues.
Our Community Forum on Bullying and Harassment in the Metaverse
A Community Forum on the Metaverse was conducted in collaboration with Stanford’s Deliberative Democracy Lab on the topic of bullying and harassment and was a first-of-its-kind experiment in global deliberation. We chose to focus on closed virtual spaces so that the forum could advise on policy and product development for virtual experiences such as Horizon Worlds. This Forum included 6,488 participants from 32 countries and functioned as an important pilot to establish proof of concept. Read more here.
Our Community Forum on Climate Misinformation
This Forum deliberated on the challenging topic of misleading climate change content. We brought together over 250 Facebook users across five countries, to ensure that we heard from people from different nationalities, ethnicities, socio-economic backgrounds, and political ideologies. This Forum functioned as an important pilot to establish proof of concept. Read more here.
Teams across Meta coordinate to determine significant and difficult innovation questions that would benefit from community consultation.
At Meta, we see these types of processes as a valuable way to meaningfully engage the public on complex issues. We consider a topic as appropriate for deliberation when it is:
Significant: it may be an important issue for society and technology.
Difficult: it poses clear dilemmas or tough tradeoffs, and many people are likely to have varying perspectives.
Open-ended: it has multiple possible solutions.
To ensure quality deliberation and credible results, it is critical to get a true cross-section of communities to participate in our Community Forums.
We work with Stanford University’s Deliberative Democracy Lab and polling firms to recruit a representative sample for each country we host Forums in.
To reduce barriers to participation, participants are supported with access to technology, internet connection, and childcare as needed.
Objective, unbiased education is a key component so that participants can effectively grapple with the competing tradeoffs associated with the Forum’s topic. We develop materials with Stanford’s Deliberative Democracy Lab and other outside experts to ensure that all participants have equal access to a baseline knowledge of the topic and the ability to engage in deliberation, regardless of their background.
Participants are guided by Stanford’s AI-facilitated platform to deliberate on the Forum’s topics, taking turns speaking and sharing their opinions with fellow participants.
Deliberation encourages people to reflect on the education materials, their own lived experience, and the perspectives shared by others.
In between deliberations with their peers, participants attend a question and answer panel with industry experts. The panelists are responsible for clarifying participants’ understanding of contested issues, presenting novel tradeoffs on the topics participants are discussing, and correcting any misinformation that may have come up during deliberation.
As a part of Stanford’s deliberative poll methodology, participants provide their perspectives on the innovation topics before and after the Forum.
We receive these survey results, alongside key themes that emerged from the participants’ small group discussions.
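The pre/post survey design described above can be summarized quantitatively as a per-question opinion shift. Here is a minimal, hypothetical sketch of that arithmetic; the function name, question labels, rating scale, and data are all invented for illustration and are not Meta's or Stanford's actual analysis:

```python
# Hypothetical sketch: summarizing deliberative-poll surveys as the mean
# change in agreement per question, before vs. after deliberation.
from statistics import mean

def opinion_shift(pre, post):
    """Mean change per question (e.g. on a 0-10 agreement scale).

    pre and post map question labels to lists of participant ratings.
    A positive value means agreement rose after deliberation.
    """
    return {q: round(mean(post[q]) - mean(pre[q]), 2) for q in pre}

# Invented ratings from four participants on two example questions.
pre = {
    "platforms_should_act_on_harassment": [6, 5, 7, 6],
    "ai_should_remember_conversations":   [4, 5, 3, 4],
}
post = {
    "platforms_should_act_on_harassment": [8, 7, 8, 7],
    "ai_should_remember_conversations":   [6, 6, 5, 7],
}

print(opinion_shift(pre, post))
```

In a real deliberative poll this shift would be reported alongside the qualitative themes from the small group discussions, so the numbers are read in context rather than in isolation.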
Taken together, these inputs allow the public to weigh in directly on our innovation questions in a way that reflects the complexity of the topic.
The final results reports from our Forums are released publicly by Stanford’s Deliberative Democracy Lab.
We take the findings from the Community Forum and collaborate with teams across Meta to inform product and policy decisions with the feedback we’ve received from the public.
Our Community Forums often address long-term innovation questions. As such, the implementation of the public input can unfold over time and influence multiple different decisions. We publish updates on our implementation progress here on our Transparency Center.